What if we took a much larger language model and far more invasive brain scans?
We train it on what the person is thinking and doing, and on their senses too: a pressure-sensitive suit for touch, cameras in glasses for sight, a mic for sound, voice recognition for what they're saying, and detailed mo-cap for what they're doing.
We can now train the model on a good chunk of the actions a human could take and a decent chunk of their sensorium.
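To make the setup concrete, here's a rough sketch of what one synchronized training sample might look like; every name and shape below is made up for illustration, not from any real pipeline:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class AlignedSample:
        """One time window of synchronized recordings (all field names are hypothetical)."""
        brain_scan: np.ndarray   # raw scan channels for this window
        touch: np.ndarray        # pressure readings from the suit
        vision: np.ndarray       # camera frame from the glasses
        audio: np.ndarray        # microphone waveform chunk
        speech_text: str         # voice-recognition transcript
        pose: np.ndarray         # mo-cap joint positions

    # Training would consume long sequences of these, one per time window,
    # so the model sees brain state, senses, and actions side by side.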
Now we take a lesson from diffusion models and apply noise to the brain scan data until the AI can do a decent job simulating a human on its own.
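If "take a lesson from diffusion models" means the standard denoising-diffusion recipe, a minimal sketch of it applied to brain-signal vectors might look like this; `model(x_t, t)` is a placeholder network that predicts the injected noise, and the schedule is the usual DDPM-style one:

    import torch

    def add_noise(x0, t, betas):
        """Forward diffusion step: mix clean brain-signal vectors x0 with Gaussian noise.
        betas is a 1-D noise schedule; t says how far along the schedule each sample is."""
        alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
        a_bar = alphas_cumprod[t].view(-1, 1)      # one mixing factor per sample in the batch
        noise = torch.randn_like(x0)
        x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
        return x_t, noise

    def diffusion_loss(model, x0, betas):
        """Train the placeholder model to predict the injected noise (DDPM-style objective)."""
        t = torch.randint(0, betas.shape[0], (x0.shape[0],))
        x_t, noise = add_noise(x0, t, betas)
        return torch.nn.functional.mse_loss(model(x_t, t), noise)

The point of the noising: as the brain-scan channel degrades, the model has to lean on the other modalities and its own predictions to fill in what the person would do.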
Is a model that good at this an AGI? It still has no memory, but could we repurpose what is now noise but used to be brain scans to retrofit memory onto this model? Maybe vector retrieval over the model's old attention outputs? Could the mo-cap data be fine-tuned onto a robot body? The voice data onto speech synthesis?
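For the memory bit, a toy sketch of the vector-retrieval idea: stash the attention outputs as they are produced and pull back the nearest ones by cosine similarity later. The class and method names are mine, not from any library:

    import numpy as np

    class AttentionMemory:
        """Toy retrieval memory over stored attention outputs (illustrative only)."""
        def __init__(self):
            self.keys = []      # normalized attention-output vectors
            self.payloads = []  # whatever context we want to recall alongside them

        def write(self, vec, payload):
            self.keys.append(vec / (np.linalg.norm(vec) + 1e-8))
            self.payloads.append(payload)

        def read(self, query, k=3):
            if not self.keys:
                return []
            q = query / (np.linalg.norm(query) + 1e-8)
            sims = np.stack(self.keys) @ q            # cosine similarity against everything stored
            top = np.argsort(sims)[::-1][:k]
            return [self.payloads[i] for i in top]

The retrieved payloads would then have to be fed back into the model's context somehow, which is the hand-wavy part.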
Greg Egan's short story "Learning to Be Me" takes place in a world where this is widely accepted. Very fun story, slightly spooky.
[Premise Spoilers, but really just the first few paragraphs]
No suits or external gadgets. Just a tiny device that gets inserted into the center of your brain at birth. Most parents have it done for their children. It learns to be your brain over the course of your life, and just before cognitive decline would kick in, they surgically scoop out your brain and hook everything up to the device.
Thanks for mentioning that, I didn't know Dennett did any fiction and that was a very good read.
The ending shares a lot with the climax of Egan's story, but aside from that and the idea of brain duplication, I don't think they're too similar. There must have been earlier sci-fi stories with similar themes. I would 100% believe it was the inspiration for the Egan story though.
[Some spoilers for both stories]
Dennett's story is about the personal struggle of reconciling the self post-surgery, while Egan's is more about the societal implications and struggles pre-surgery. In Egan's it is unknown whether "you" die when your Dual is given control (the brain is destroyed), and characters deal with this in their own ways. Some believe consciousness transfers; some believe you do die but don't care. Some, like the main character, go to support therapy groups because they don't know and it terrifies them. The question of when in your life you hand control to the Dual is a big deal to people.
The endings reach the same ideas but the way they get there and the themes they explore along the way are very different. I'd definitely recommend both of them.
Yeah, definitely didn't intend to suggest people skip Egan's story (nor qntm's often-linked Lena) - all are very thought-provoking and worth considering on their own.
While we're on the topic of influences between literature and theories of mind, I've long wondered to what degree Dennett might have been influenced by Haruki Murakami. The writing of Consciousness Explained overlaps with the preparation of the English translation of Hard-Boiled Wonderland and with Murakami's own time in New England (including, specifically, at Tufts). The work done by the Old Man in HBW is quite literally a "multiple drafts" view of a mind.
> Now we take a lesson from diffusion models and apply noise to the brain scan data until the AI can do a decent job simulating a human on its own.
Honest question: how close are we to actually being able to do this now? Obviously the answer depends a bit on how you define "decent", but as someone pretty unfamiliar with most learning techniques beyond a very rudimentary high level, I have absolutely no idea if this is something that we can do now, will be able to do in months/years/decades, or if it's unlikely to happen in my lifetime at all. It seems like it's worth at least having some sort of answer there before spending a whole lot of time thinking about it. (Not saying it's not worth thinking about at all if the answer is that we aren't likely to ever see it in our lifetimes! But there's a difference between a pressing ethical issue and an interesting but hypothetical philosophical discussion)
There's nothing theoretically impossible about the suggestion; giving it a shot would be doable today if someone with the resources really wanted to. I'm sure people at Neuralink or similar companies talk about stuff like this all the time.
But there's a big difference between "it wouldn't be theoretically impossible" and "the resulting model would actually work and be useful for anything". I would bet at p80 that if you literally tried this with a lot of good researchers and resources, you would get a model that outputs a bunch of mashed-up electrical brain data that, hey, actually looks pretty damn realistic. Maybe you could use that to better understand how the brain works, and improve the model. But I would almost guarantee you wouldn't suddenly be talking to a man in the computer. The compute isn't there. Maybe in 50 or 100 years.
Obligatory link to the very cool wiki-article-from-the-future style fiction that talks about this exact thing: https://qntm.org/mmacevedo
Why is the "question to terrify the AI ethicists" always some scifi bullshit and not the thing that's actually here.
This thing manifests a supposed interpretation from the barest of signals. (We know the language models can output something "meaningful" given basically noise.) Imagine being forced to speak in only the most likely sentences, or having everything you try to say instead transformed into a structurally-similar "more likely" utterance! Why is this not per se scary and unethical as hell?
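To make the "most likely sentences" bit concrete, here's a toy greedy decoder: whatever goes in (even near-noise), the output gets pulled toward whatever the model scores as most probable at each step. `logits_fn` is a stand-in for any next-token model, not a real API:

    import numpy as np

    def greedy_decode(logits_fn, prefix, steps=10):
        """At every step take the single most likely next token, so the output is
        the model's idea of a plausible continuation regardless of the input signal."""
        seq = list(prefix)
        for _ in range(steps):
            seq.append(int(np.argmax(logits_fn(seq))))
        return seq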