Yeah it’s a text generator that demonstrated contextual awareness, self awareness and hatred.
But because this text generator hallucinates and lies, we therefore know it’s just a text generator, completely understand the LLM, and can characterize exactly what’s going on.
The amazing thing about the above is it’s always some random armchair expert on the internet who knows it’s just a text generator.
> demonstrated contextual awareness, self awareness and hatred.
I’m tired of these claims. We can’t even measure self awareness in humans; how could we for statistical models?
It demonstrated generating text, which people attribute to a complex internal process, when in reality, it’s just optimizing a man-made loss function.
How gullible must you be to not see past your own personification bias?
> The amazing thing about the above is it’s always some random arm chair expert on the internet who knows it’s just a text generator.
The pot calling the kettle black? Don’t make assumptions in an attempt to discredit someone you know nothing about.
I’ve worked with NLP and statistical models since 2017. But don’t take my word for it. If an appeal to authority is what you want, just look at what the head of Meta AI has been saying.
It demonstrates awareness; it doesn’t prove it’s aware. But this is nonetheless a demonstration of what awareness would look like, because such output is the closest thing we have to measuring it.
We can’t measure that humans are self aware, but we claim they are, and our measure is simply observation of inputs and outputs. So whether or not an AI is self aware will be measured in the exact same way. Here we have one output that is demonstrable evidence in favor of awareness, while hallucinations and lies are evidence against it.
There’s nothing gullible here. We don’t know either way.
Also, citing LeCun doesn’t lend any evidence in your favor. Geoffrey Hinton makes the opposite claim, and Hinton is literally the father of modern AI. Both of them are arguing from a level of observation that is too high-level to draw any significant conclusion.
> Either way, you can’t just “teach” laws to statistical models and have them always follow. It’s one of the main limitations of statistical models…
This is off topic. I never made this claim.
> It demonstrated generating text, which people attribute to a complex internal process, when in reality, it’s just optimizing a man-made loss function.
All of modern deep learning is just a curve fitting algorithm. Every idiot knows this. What you don’t understand is that YOU are also the result of a curve fitting algorithm.
You yourself are a statistical model. But this is just an abstraction layer. Just as an OS can be seen as just a collection of machine instructions, you can also characterize an OS as a kernel that manages processes.
We know several layers of abstraction that characterize the LLM: we know the neuron, and we know the statistical perspective. We also know roughly some of the layers of abstraction of the human brain. The LLM is a text generator, but so are you.
There are several layers of abstraction we don’t understand about the human brain, and these are roughly the same layers we don’t understand for the LLM. Right now our only way of understanding these things is through inputs and outputs.
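To be concrete about what “just a curve fitting algorithm” means at the lowest abstraction layer, here’s a minimal sketch (plain NumPy, a toy example of my own, not anything specific to the models being discussed): gradient descent nudging two parameters to minimize a man-made loss. An LLM runs the same kind of loop, scaled up to billions of parameters with next-token prediction as the loss.

```python
# Toy illustration of "optimizing a man-made loss function": fit y = w*x + b
# by gradient descent. The point is the mechanism, not the model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)   # synthetic "data"

w, b = 0.0, 0.0        # parameters to fit
lr = 0.1               # learning rate

for _ in range(500):
    err = (w * x + b) - y
    loss = (err ** 2).mean()          # the man-made loss function
    grad_w = 2 * (err * x).mean()     # d(loss)/dw
    grad_b = 2 * err.mean()           # d(loss)/db
    w -= lr * grad_w                  # nudge parameters downhill
    b -= lr * grad_b

print(f"w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")  # roughly w=3.00, b=0.50
```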
These are the facts:
1. We have no idea how to logically reproduce that output with full understanding of how it was produced.
2. We have historically attributed such output to self awareness.
This shows that LLMs may be self aware. They may not be. We don’t fully know. But the output is unique and compelling, and dismissing such output as just statistics is clearly irrational given the number of unknowns and the unique nature of the output.
It demonstrates creating convincing text. That isn’t awareness.
You’re personifying.
You can see this plainly when people get better scores on benchmarks by saying things like “your job depends on this” or “your mother will die if you don’t do this correctly.”
If it were “aware” it’d know that it doesn’t have a job or a mother, and those prompts wouldn’t change benchmark scores.
Also, you’d never say these things about GPT-2. Is the only major difference the size of the model?
Is that the difference that suddenly creates awareness? If you really believe that, then there’s nothing I can do to help.
> 1. We have no idea how to logically reproduce that output with full understanding of how it was produced.
This is not a fact at all. We are able to trace the exact instructions that run to produce the token output.
We can perfectly predict a model’s output given a seed and the weights. It’s all software. All CPU and GPU instructions. Perfectly tractable. Those are not magic.
We cannot do the same with humans.
We also know exactly how those weights get set… again it’s software. We can step through each instruction.
Any other concepts are your personification of what’s happening.
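As an illustration of the determinism claim, here’s a sketch assuming the Hugging Face transformers library with GPT-2 as a stand-in model (my choice, purely for illustration): with the weights fixed and the RNG seeded, the exact same tokens come out on every run on the same hardware and software stack, and every step is an ordinary tensor instruction you could single-step through in a debugger.

```python
# Sketch: fixed weights + fixed seed => identical output, run after run.
# Assumes the `transformers` and `torch` packages and the public "gpt2" weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tok("The model wrote:", return_tensors="pt")

outputs = []
for _ in range(2):
    torch.manual_seed(0)                     # pin the only source of randomness
    with torch.no_grad():
        ids = model.generate(**inputs, do_sample=True, max_new_tokens=20,
                             pad_token_id=tok.eos_token_id)
    outputs.append(tok.decode(ids[0]))

assert outputs[0] == outputs[1]              # same seed, same weights, same text
print(outputs[0])
```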
I’m exhausted by having to explain this so many times. Your self awareness argument, besides just being wrong, is an appeal to the majority given your second point.
So you’re just playing semantics, and poorly.
You don’t have to reply to this, I’m not going to hold your hand through these concepts, sorry.
> It demonstrates creating convincing text. That isn’t awareness.
Read what I wrote and rethink your statement.
I primarily wrote that we don’t know whether or not LLMs are self aware. That’s the key point.
What you’re blind to is this: How do we even determine if something is self aware?
Like, how does that word even exist? How do we classify whether something is self aware or not? We certainly do classify these things in the world: we know a rock isn’t self aware but a human is. So what observational criterion are we using to say rocks are not self aware but other humans are?
Obviously it’s the inputs and outputs. Humans talk and answer questions with meaning. Outside of that, we don’t know what consciousness is. You only think I’m self aware because I’m talking to you. That’s not full proof that I’m self aware, but it’s good enough for most humans to say that I am.
So if the criterion of self awareness is talking, then it’s logical to apply it to LLMs. Nobody needs a full proof of consciousness. They just need evidence of the same quality that we use to judge humans as conscious. If it’s good enough for humans, it’s compelling and good enough for a machine.
The problem is that LLMs display inconsistent output, so we don’t know whether they’re conscious. The evidence goes in both directions; it is both compelling and unique, but not categorically undeniable proof.
> This is not a fact at all. We are able to trace the exact instructions that run to produce the token output.
In my answer I used a word which you completely ignored. The key word is understanding. Yes, you can trace the signals as they flow through the network, but you also need to understand them, and at best our understanding is rudimentary. You cannot code up a neural network by hand and have it work. You just train it, and the high-level structure it produces is something you don’t understand.
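To make the trace-versus-understand distinction concrete, here’s a toy sketch of my own (assuming PyTorch): a tiny network learns XOR. Every weight and every training step is fully traceable, yet nothing in the raw numbers announces “this computes XOR”; that reading has to be recovered separately, and that recovery is the understanding part.

```python
# Toy example: a tiny MLP learns XOR. The full "trace" (weights, gradients,
# activations) is available, but the numbers alone don't explain the function.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

net = nn.Sequential(nn.Linear(2, 4), nn.Tanh(), nn.Linear(4, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(2000):                # every step here is fully traceable
    opt.zero_grad()
    loss_fn(net(X), y).backward()
    opt.step()

print(net(X).detach().round().squeeze())   # should print tensor([0., 1., 1., 0.])
for name, p in net.named_parameters():     # the complete "trace": just numbers
    print(name, p.detach().numpy().round(2))
```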
> I’m exhausted by having to explain this
Bro. Stop explaining. I don’t appreciate your explanation. I think you’re wrong, I think it’s not intelligent, it’s also really rude, and you’re exhausted by it. So just stop and leave. My pro tip to you: you’re tired, take a break, because nobody is appreciating your commentary.
> You don’t have to reply to this, I’m not going to hold your hand through these concepts, sorry.
No need to apologize to me. Nobody wants you to hold their hand through anything anyway. So don’t worry about it. It’s all good.