Neural networks originated from coarse-grained analogies to a 1940s understanding of neurons. That's about where the neuroscience connection ended. People have tried to make connections since then, but they're almost always post-hoc.
If you listen to recent talks by Hinton (capsule networks), LeCun (self-supervised learning), and Bengio (system 2 deep learning), as well as others, you'll find plenty of references to neuroscience, psychology, cognitive science, etc. There are always implementation differences, but the inspiration from brains is there throughout. The point of the book (which might be wrong, btw) is that the brain itself is an agent of the gene, which evolved out of the need for better survival mechanisms. Therefore, it suggests that anything modeled after the brain is, by extension, an agent of the main source of human intelligence (because it serves the goals of humans) and not intelligent by itself.
Hierarchical modeling. Spiking neural nets. "Fire together, wire together" (Hebbian learning). Convolution. Boltzmann machines. Autoencoding. LSTM gating. Attention, transformers, GANs, etc.
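To make the Hebbian item concrete, here's a minimal NumPy sketch of the "fire together, wire together" rule. The learning rate, toy spike data, and firing condition are invented for illustration, not taken from any particular model.

```python
import numpy as np

# "Fire together, wire together": a weight grows whenever its
# pre-synaptic input and the post-synaptic output are active together.
rng = np.random.default_rng(0)
eta = 0.1            # learning rate (arbitrary choice)
w = np.zeros(4)      # weights for 4 input neurons

for _ in range(100):
    x = rng.integers(0, 2, size=4).astype(float)  # pre-synaptic spikes
    y = float(x[0] + x[1] > 1)   # post-neuron fires when inputs 0 and 1 co-fire
    w += eta * x * y             # Hebbian update: dw = eta * x * y

print(w)  # weights 0 and 1 dominate: they reliably co-fire with the output
```

No error signal anywhere; correlation alone drives the weights, which is what separates this rule from backprop-style training.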
GOFAI might not pull inspiration from the brain, but connectionist-style AI, which represents the vast majority of AI being produced and operated, almost exclusively uses brains for inspiration.
Sure, if you define "intelligence" as "solving problems in a variety of environments to accomplish your own goals and self-replicate", then no, modern AI is not intelligent. You have just redefined intelligence so that only living beings can be intelligent.
A computer virus that evolves and spreads and is too elusive for humans to eliminate would fit that scenario. I think the point is that as long as humans define what the AI should do, it will never be intelligent; it will only become intelligent when we lose control of it.
I think that was his point. I'm not sure I agree with it, but at least it isn't trivially wrong.
Isn't that just moving the goalposts to a higher level of abstraction? You could envision an AI where the only instruction is "spread yourself". In that quest it could create a vastly more impressive society of AI agents with their own culture.
Humans have instructions too, rooted in evolution and biology. It's not at all clear to me why an AI that follows an instruction must, by definition, be considered unintelligent. That would imply humans are unintelligent.
Well, even natural viruses are not technically considered "living", so there's a lot of confusion between what current science calls a thing and what the common understanding of that thing is.
My take on intelligence over the past 20 years has been that it is high-quality, efficient search of immense state spaces.
"Solving intelligence" as a famous corporation motto, might just be improving state-space search.
Humans are incredible at state-space search; it's obvious as soon as you consider the potential data points of any problem we face every day, from washing dishes to designing algorithms.
Also, the above definition is essentially the original (50s-70s) definition of AI, and I reckon that school of thought didn't end too well. By that definition, current computers are way more intelligent than humans on almost every benchmark. SAT and SMT solvers can tackle huge problems without breaking a sweat.
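As a concrete illustration of that raw search power, here's a small sketch using the z3 SMT solver (the z3-solver package on PyPI). The specific constraints are a made-up toy, but the same check() call routinely handles problems with thousands of variables, where brute-forcing the state space would be hopeless.

```python
from z3 import Solver, Int, sat

# 200 bounded integers: brute force would enumerate up to 10^200 states,
# but the solver prunes the space via propagation and clause learning.
xs = [Int(f"x{i}") for i in range(200)]

s = Solver()
for x in xs:
    s.add(0 <= x, x <= 9)        # each variable is a single digit
for a, b in zip(xs, xs[1:]):
    s.add(a + b <= 12)           # local pairwise constraints
s.add(sum(xs) == 900)            # one global constraint tying them together

if s.check() == sat:
    m = s.model()
    print([m[x].as_long() for x in xs[:10]])  # first few assigned values
```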
We humans, by comparison, are pretty crap at raw state-space search, but we shine when said state space is dynamic or the search is heavily context-dependent.
But that's literally what ML training does. Just like humans, neural nets learn heuristics to take advantage of the fact that, of all the possible mappings of inputs to outputs, vanishingly few output states are actually valid. Arguably all learning is a reduction of state space, be it by human or machine.
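A rough sketch of that idea: the gradient-descent loop below searches the continuous space of weights and collapses onto the tiny region consistent with the data, rather than enumerating mappings. The linear model, data, and learning rate are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth mapping the model should discover.
true_w = np.array([2.0, -3.0, 0.5])
X = rng.normal(size=(256, 3))
y = X @ true_w

# The weight space is all of R^3; training is a guided walk through it,
# driven by the error signal instead of exhaustive enumeration.
w = np.zeros(3)
lr = 0.1
for step in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(X)  # gradient of mean squared error
    w -= lr * grad

print(w.round(3))  # converges to roughly [2.0, -3.0, 0.5]
```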
This isn't true. The author doesn't seem to understand modern AI.