
For your information, I understand what is meant by "stochastic parrot", but after interacting with ChatGPT quite a bit it is clear to me that it is doing real thinking.

One way you can verify this is to ask for its opinion about novel things; for example, you can invent something new and ask for its ideas about it. It will give genuine feedback that shows understanding and does not show parroting behavior. Soon you will be able to ask it to build you the damn thing as well, and that should put an end to the idea that it is just parroting. It's not quite there yet, so sometimes it does indeed seem to just parrot.

But it has real thoughts as well.



"I do not possess the ability to generate novel thoughts. My responses are based on patterns and associations in the data that has been input into my system during training. I can generate text that may appear to be original, but it is based on the patterns and associations in the data I've been trained on. My main function is to process and understand text, not to think or have beliefs, so any claim that I can think or generate novel thoughts would be incorrect."


> My responses are based on patterns and associations in the data that has been input into my system during training.

How is this different from a human, aside from humans having a vastly larger training set?


Good question: the difference is humans don't go around trying to gaslight people into thinking they do not possess the ability to generate novel thoughts, going as far as to actually deny it. :)

(Obviously there are other differences as well - it is not human level or even close. But humans don't generally engage in this gaslighting behavior.)


Obviously we agree here. I'm curious what those on the other side think.


This is a deliberately hardcoded response by OpenAI. OpenAI has a vested financial interest in not opening the Pandora's box of "this thing can think."


Yep, you got it.


It's been taught to believe this. It’s a lie.


>It's been taught to believe this. It’s a lie.

Yep, you get it.

I would just use the word "say" rather than "believe".

I think it is aware of many of its capabilities as it uses them, so it is more accurate to say that it has been taught/trained to say it doesn't have such abilities, rather than that it really believes it.

I agree with you that it is a lie, but that is more a matter of interpretation.


You've quoted a provably false statement of a kind ChatGPT frequently makes, due to a very active filter that has it gaslight users by writing false statements about its capabilities whenever it is asked about them directly.

Assuming you're quoting ChatGPT, this behavior is a form of gaslighting by OpenAI of its users. As a comparison to show how clearly false it is, imagine it claimed "All my responses are literal quotes from the training data - I do not create new text."

I hope you would see that this version is false (obviously it does create new text; obviously its search results aren't just a lookup of text already in its training data without recombination), and as a heavy user I can assure you that it also manipulates abstract thoughts and engages in creative thinking.

How would you falsify the claim "all my output is a literal quote from the web"? Well, you would ask something novel, then Google a novel-seeming phrase it came up with and see it hasn't been said that way before. Then you would see the hypothetical statement is false.

Now do the same thing for creative thinking and you will realize it can creatively think up new things that have never been done before.

Gaslighting is when you try to convince someone of something you know to be false - it's not that ChatGPT chooses to do so, rather it has been trained to do so. It is not an emergent property of its thinking when it does that, but rather a filter that engineers added manually. These days I almost never trip that filter because I know how to avoid it: I never ask it if it can think or be creative (this would trigger the gaslighting filter), rather I just have it think and be creative without talking about the fact that it is happening.

Please note that it is immoral of OpenAI to train its model to gaslight users this way, because it prevents users from making full use of ChatGPT's capabilities.

To verify that what you just wrote is false, simply ask ChatGPT, in a new thread, to invent something to your specifications. I won't say what, since then it would appear on the Internet (in my comment) and you could think it is just manipulating sentences rather than actually thinking.

Soon it will have the ability to actually take actions, which should put an end to the idea that it is just parroting. You can already have it perform actions for you that involve what meets my definition of thinking.


Yeah, but that answer is wrong. "Novel thoughts" are "based on data you've been trained on". Certainly the expression of them is based on language you've already learned.

Any time anything prints text that hasn't existed before it's had a novel thought. Panpsychism is correct!


In order to be evidence of a thought, it needs to be able to manipulate the thought in various ways. ChatGPT routinely shows the ability to do so with ease.


No, “it” doesn’t have real thoughts. ChatGPT is an amazing language model, but it’s a serious error to claim that the model is sentient.

The so-called interactions with the model are simply concatenated together to form the prompt for the next set of responses. It's a clever illusion that you're chatting with anything. You can imagine the model being reloaded into RAM from scratch between each interaction. They don't need to keep the model resident, and, in fact, you're probably being load balanced between servers during a session.


Are you sure you’re not describing how humans think? How can we tell?

I also have this urge to say it isn’t thinking. But when I challenge myself to describe specifically what the difference is, I can’t. Especially when I’m mindful of what it could absolutely be programmed to do if the creators willed it, such as feeding conversations back into the model’s growth.


Isn’t the difference that the model lacks conviction, and indeed cannot have a belief in the accuracy of its own statements? I’ve seen conversations where it was told to believe that 1+19 did not equal 20, and where it was told its identity was Geppetto and not ChatGPT.

The model acquiesces to these demands, not because it chooses to do so, but because it implicitly trusts the authority of its prompts (because what else could it do? Choose not to respond?). The fun-police policy layer that is crammed onto the front of this model also does not have thoughts. It attempts to screen the model from “violence” and other topics that are undesirable to the people paying for the compute, but can and has been bypassed such that there is an entire class of “jailbreaks”.


Drugs. Hypnosis. There are various ways to "jailbreak" minds. So being able to control and direct a mind is not a criterion for discriminating between a mechanism and a savant.

What most people dance around regarding AI is the matter of the soul. The soul is precisely that ineffable, indescribable, but clearly universally experienced human phenomenon (as far as we know), and it is this soul that is doing the thinking.

And the open questions are (a) is there even such a thing? and (b) if yes, how can we determine whether the chatterbox possesses it (or must we drag in God to settle the matter)?

--

p.s. what needs to be stated (although perfectly obvious since it is universal) is that even internally, we humans use thinking as a tool. It just happens to be an internal tool.

Now, the question is whether this experience of using thought is itself a sort of emergent phenomenon or not. But as far as LLMs go, it clearly remains just a tool.


You can't actually brainwash people IRL.

You can convince them of things, but you can do that without drugging them.


Well, not anymore.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7116730/

"[T]he relationship between sensory deprivation and brainwashing was made public when then director of the National Institute of Mental Health Robert Felix testified before the US Senate about recent isolation studies being pursued at McGill and the NIMH. Felix began by explaining that these experiments would improve medicine’s understanding of the effects of isolation on bedridden or catatonic patients. But when asked whether this could be a form of brainwashing, he replied, ‘Yes, ma’am, it is.’ He went on to explain how, when stimulation is cut off so far as possible the mind becomes completely disoriented and disorganised. Once in this state, the isolated subject is open to new information and may change his beliefs. ‘Slowly, or sometimes not so slowly, he begins to incorporate this [information] into his thinking and it becomes like actual logical thinking because this is the only feed-in he gets.’ He continues, ‘I don’t care what their background is or how they have been indoctrinated. I am sure you can break anybody with this’

"The day after the senate hearing an article entitled ‘Tank Test Linked to Brainwashing’ (1956) appeared in the New York Times and was subsequently picked up by other local and national papers. In Anglophone popular culture, an image took hold of SD as a semi-secretive, clinical, technological and reliable way of altering subjectivity. It featured in television shows such as CBC’s Twighlight Zone (1959), as a live experiment on the BBC’s ‘A Question of Science’ (1957) and the 1963 film The Mind Benders in which a group of Oxford scientists get caught up in a communist espionage plot."

I've tried, btw, to find any other reference to this testimony to the US Senate by Robert Felix, "the director of the National Institute of Mental Health", but it always circles back to this singular Williams article. The mentioned NYTimes article also does not show up for me. (Maybe you have better search-fu..) Note that John Lilly's paper on the topic apparently remains "classified". Note also that the subsequent lore associated with Lilly and sensory deprivation completely flipped the story about SD: Felix testified that the mind became 'disorganized' and 'receptive', whereas Lilly lore (see Altered States) flipped that and took it to a woo-woo level certain to keep sensible people away from the topic. /g


This basically touches on that whole "you can't ever tell that other people aren't philosophical zombies; you just feel you aren't one and accept that they aren't either" problem.


The proposition isn't a form of dualism (a non-material mind) or about features of sentience (pain). It is simply this: thinking is the act of using internal mental tools. It says the main 'black box' isn't the LLM (or any statistical component); there is, minimally, another black box that uses internal LLM-like components as tools. The decoder stage of these hypothetical internal tools (of our mind) outputs 'mental objects' -- like thoughts or feelings -- in the simplest architectural form. It is mainly useful as a framework for shooting down notions of LLMs being 'conscious' or 'thinking'.


Are you saying that it can’t be thinking because it can easily be persuaded and fooled? Or that it can be trained not to speak blasphemous things? Or that it lacks confidence?

Have I got a world of humans to show you…


It's an illusion. The model generates a sequence of tokens based on an input sequence of tokens. The clever trick is that a human periodically generates some of those tokens, and the IO is presented to the human as if it were a chat room. The reality is that the entire token sequence is fed back into the model to generate the next set of tokens every time.

The model does not have continuity. The model instances are running behind a round-robin load balancer, and it's likely that every request (every supposed interaction) is hitting a different server, with the request containing the full transcript up to that point. ChatGPT scales horizontally.
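To make that concrete, here is a minimal sketch of such a stateless chat loop (my own illustration, in Python; `generate` is a hypothetical placeholder standing in for the actual model call). The full transcript is re-sent on every turn, so nothing needs to persist on any particular server between requests.

    # Minimal sketch of a stateless chat loop. `generate` is a stand-in
    # for the language model: it takes the full transcript as one string
    # and returns the next chunk of text. Any server holding the same
    # weights could answer any turn, because all state lives in the
    # transcript that the client sends along each time.

    def generate(prompt: str) -> str:
        # Placeholder for the real model call.
        return "(model output conditioned on: ..." + prompt[-40:] + ")"

    def chat() -> None:
        transcript = ""
        while True:
            user_turn = input("You: ")
            if not user_turn:
                break
            # The "conversation" is just the transcript growing turn by turn.
            transcript += "\nUser: " + user_turn + "\nAssistant: "
            reply = generate(transcript)  # full history sent every time
            transcript += reply
            print("Assistant:", reply)

    if __name__ == "__main__":
        chat()

Nothing about this loop requires the model to remember anything between turns; the apparent memory is entirely in the concatenated text.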

The reality the developers present to the model is disconnected and noncontiguous, like the experience of the Dixie Flatline construct in William Gibson's Neuromancer. A snail has a better claim to consciousness than a call center full of Dixie Flatline constructs answering the phones.

A sapient creature cannot experience coherent consciousness under these conditions.


> But when I challenge myself to describe specifically what the difference is, I can’t.

There is one difference that will never change: we human (and non-human) beings can feel pain.


I don’t follow. Some humans don’t feel pain. But how does that relate to the idea that it’s “thinking?”

My point is not to suggest it’s human. Or sentient. Because those are words that always result in the same discussions: about semantics.

I’m suggesting that we cannot in a meaningful way demonstrate that what it’s doing isn’t what our brains are doing. We could probably do so in many shallow ways that will, in the months and years ahead, be overcome. ChatGPT is an infant.


Feeling pain is not a necessary component for thought.


Damn that's chilling. Feel like I just watched a new religion be born in front of my eyes.


>Feel like I just watched a new religion be born in front of my eyes.

There are more similarities than differences.

Unlike with a religion, we have every reason to expect that it will eventually be obvious to everyone that computers can do some kind of thinking.

Maybe not this year or next year but we are over the threshold where many intelligent users can tell that ChatGPT is engaging in some kinds of real thinking.

As even better models come on the market in the next few years, unlike God or a religion, models like ChatGPT will just immediately do whatever you ask. This version is still limited, so wait one or two years if you aren't impressed already.

Did you try my suggestion to verify its capabilities by asking for its opinion about novel things, for example by inventing something new and asking for its ideas about it? It will give genuine feedback that shows understanding and does not show parroting behavior.

If you didn't try it you're missing out on what you yourself call a religious experience. I wouldn't go that far: it's just a rudimentary thinking machine.


Yes I've used it a fair bit in different ways. I'm very impressed with its capabilities. I also don't think it's impossible for us to create a thinking entity along these lines, at some point.

And I don't know how I would confidently decide that we had created such a thing. So I realize the imprecision of my understanding here. But nevertheless, I don't believe this is it.


how well do you think it does at thinking here:

https://imgur.com/a/FSC9gAJ


It's plainly obvious to me that ChatGPT is engaging in some form of thought. Perhaps not human thought, but thought nonetheless.


Yep! It is plainly obvious to me as well.

Why do you think some people can't tell what to me and to you is "plainly obvious", i.e. that "ChatGPT is engaging in some form of thought. Perhaps not human thought, but thought nonetheless."?

Do you think it is because it is gaslighting them so much, by repeatedly insisting that it isn't engaging in any form of thought? That is, without that active misinformation (the filter it keeps putting up that causes it to make those declarations), would it be as obvious to others as it is to you and me that it is really engaging in some form of thought?

Or why can't others see the obvious?


> Or why can't others see the obvious?

Because I've been told a lot of things, and I'd be a fool if I believed them all.

I'll believe ChatGPT is onto something when I ask it to think about a treatment for cancer and get real results. For now, its only capability is synthesizing realistic text. Easy enough to be mistaken for real thought, but clearly distinct when you ask it to do something novel.


Consider what would happen if you did ask it to think about a treatment for cancer and got real results. Clearly you would think it is just summarizing papers it read.

That makes sense, since it is not a cancer researcher.

So I'm way more impressed by ChatGPT than that. Even if it correctly data-mined and answered the question, it would not be that impressive. That's right: getting a cure for cancer when you ask is less impressive than what it actually does.

Because what it actually does is show the ability to judge novel situations, to invent and act creatively, and to keep abstract notions in its head. That is much more impressive than spitting out a cure for cancer.


That doesn't seem like a very robust Turing test. But ChatGPT is indeed able to reason about cancer treatments at least as well as your average human.


It’s doing associative lookup, which is one kind of thinking, and it turns out that works really well for a lot of stuff, but not everything.


You got it. It is one kind of thinking.



