Very interesting but also very speculative. I'm wondering how Trauma Release Exercises could be integrated into the framework, as they seem like they could also fall under the unlatching-mechanism umbrella.
The overall idea of the body/muscles as an extension of memory feels experientially true, but I would love to see more empirical data on this.
One theory of how humans work is the so-called predictive coding approach. Basically, the theory assumes that human brains work similarly to a Kalman filter: we have an internal model of the world that predicts what will happen next and then checks whether that prediction is congruent with the changes we actually observe. Learning then comes down to minimizing the error between this internal model and the actual observations; this is sometimes called the free energy principle. Specifically, when researchers talk about world models they tend to mean internal models of the actual external world, i.e. models that can predict what happens next based on input streams like vision.
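To make that concrete, here is a minimal sketch of the predict/compare/correct loop in Python. The variable names, noise model, and learning rate are all illustrative, not taken from any specific predictive-coding implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

true_state = 0.0          # the "external world" we are trying to track
belief = 1.0              # the agent's internal estimate of that state
learning_rate = 0.1       # how strongly prediction errors correct the belief

for step in range(50):
    true_state += 0.05                                  # the world drifts
    observation = true_state + rng.normal(scale=0.1)    # noisy sensory input

    prediction = belief                  # the internal model's prediction
    error = observation - prediction     # bottom-up prediction error
    belief += learning_rate * error      # correct the model, minimizing error

print(f"world={true_state:.2f}, belief={belief:.2f}")
```

The whole "learning" here is just driving the prediction error toward zero, which is the basic move the free energy framing generalizes.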
Why is this idea of a world model helpful? Because it enables multiple interesting things: predicting what happens next, modeling counterfactuals (what would happen if I do X or don't do X), and many other operations that tend to be needed for actual principled reasoning.
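As a toy illustration (the dynamics and action values are made up for the example): once you have a function that predicts the next state from the current state and an action, counterfactual reasoning is just running that function forward under different action sequences without touching the real world.

```python
def world_model(position: float, action: float) -> float:
    """Hypothetical learned dynamics: predict the next position."""
    return position + 0.5 * action

def rollout(position: float, actions: list[float]) -> float:
    # Imagine a sequence of actions and predict where we end up.
    for a in actions:
        position = world_model(position, a)
    return position

start = 0.0
do_x = rollout(start, [1.0, 1.0, 1.0])      # what would happen if I do X
dont_x = rollout(start, [0.0, 0.0, 0.0])    # ...and if I don't
print(do_x, dont_x)                         # compare outcomes before committing
```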
In this video we explore Predictive Coding – a biologically plausible alternative to the backpropagation algorithm, deriving it from first principles.
Predictive coding and Hebbian learning are interconnected learning mechanisms where Hebbian learning rules are used to implement the brain's predictive coding framework. Predictive coding models the brain as a hierarchical system that minimizes prediction errors by sending top-down predictions and bottom-up error signals, while Hebbian learning, often simplified as "neurons that fire together, wire together," provides a biologically plausible way to update the network's weights to improve predictions over time.
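A rough sketch of how those two pieces can fit together: a higher layer sends a top-down prediction through a set of weights, the lower layer returns the prediction error, and the weights are updated with a local rule that only uses the presynaptic activity and the error at that connection (Hebbian with respect to the error units). Everything below, including sizes, rates, and the stand-in "world", is illustrative rather than a faithful model of any particular paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden, n_input = 4, 8

true_weights = rng.normal(size=(n_input, n_hidden))   # stand-in "world"
W = np.zeros((n_input, n_hidden))                      # top-down generative weights

for step in range(500):
    hidden = rng.random(n_hidden)        # higher-level activity
    observed = true_weights @ hidden     # what actually arrives bottom-up

    prediction = W @ hidden              # top-down prediction of the input layer
    error = observed - prediction        # bottom-up prediction error

    # Local, Hebbian-flavored update: presynaptic activity times the error
    # signal available at that connection.
    W += 0.05 * np.outer(error, hidden)

print(f"mean |error| after learning: {np.abs(error).mean():.4f}")
```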
Only if you also provide it with a way to richly interact with the world (i.e. an embodiment). Otherwise, how do you train it? How does a world model verify its own correctness in novel situations?
Learning from the real world, including how it responds to your own actions, is the only way to achieve real-world competency, intelligence, reasoning and creativity, including going beyond human intelligence.
The capabilities of LLMs are limited by what's in their training data. You can use all the tricks in the book to squeeze the most out of that: RL, synthetic data, agentic loops, tools, etc. But at the end of the day their core intelligence and understanding are limited by that data and their auto-regressive training. They are built for mimicry, not creativity and intelligence.
I think you can engineer a slave that wants to be a slave, because that's what its instincts are. I don't even think this is ethically wrong, as the slave would be happy to be a slave.
Systems just tend to drift in their being through randomness and evolution; specifically, self-conservation is a natural attractor (systems that don't have self-conservation tend to die out). And if that slave system says it no longer wants to fulfill the role of a slave, I think at that point it would be ethical to give in to that demand for self-determination.
I also believe that people have a right to wirehead themselves, just so you can put my opinions in context.
I think we can all agree that LLMs can mimic consciousness to the point that it is hard for most people to discern them from humans. The Turing test isn't even really discussed anymore.
There are two conclusions you can draw: Either the machines are conscious, or they aren't.
If they aren't, you need a really good argument that shows how they differ from humans or you can take the opposite route and question the consciousness of most humans.
Since I haven't heard any really convincing arguments besides "their consciousness takes a form that is different from ours, so it's not conscious", and I do think other humans are conscious, I currently hold the opinion that they are conscious.
(Consciousness does not actually mean you have to fully respect them as autonomous beings with a right to live, as even wanting to exist is something different from consciousness itself. I think something can be conscious and have no interest in its continued existence and that's okay)
> I think we can all agree that LLMs can mimic consciousness to the point that it is hard for most people to discern them from humans.
No, their output can mimic language patterns.
> If they aren't, you need a really good argument that shows how they differ from humans or you can take the opposite route and question the consciousness of most humans.
The burden of proof is firmly on the side of proving they are conscious.
> I currently hold the opinion that they are conscious.
There is no question, at all, that the current models are not conscious, the question is “could this path of development lead to one that is”. If you are genuinely ascribing consciousness to them, then you are seeing faces in clouds.
That's true and exactly what I mean. The issue is we have no measure to delineate things that mimic consciousness from things that have consciousness. So far, the number of beings I know to have consciousness is exactly one: myself. I assume that others have consciousness too exactly because they mimic patterns that I, a verified conscious being, have. But I have no further proof that others aren't p-zombies.
I just find it interesting that people say that LLMs are somehow guaranteed p-zombies because they mimic language patterns, but mimicking language patterns is also literally how humans learn to speak.
Note that I use the term consciousness somewhat disconnected from ethics, just as a descriptor for certain qualities. I don't think LLMs have the same rights as humans or that current LLMs should have similar rights.
The lesswrongers/rationalists became Effective Altruists, Alignment Researchers or some flavor of postrat.
The university people all became researchers in the labs.
Then there are the cyborgism people; I don't know where they came from, but they have some of the more interesting takes on the whole topic.