> Eventually, I dismissed Minsky’s theory as an interesting relic of AI history, far removed from the sleek deep learning models and monolithic AI systems rising to prominence.
That was my read of it when I checked it out a few years ago: obsessed with explicit rules-based Lisp expert systems and "good old fashioned AI" ideas that never made much sense, were nothing like how our minds work, and were obvious dead ends that did little of anything actually useful (imo). All that stuff made the AI field a running joke for decades.
This feels a little like falsely attributing new ideas that work to old work that was pretty different? Is there something specific from Minsky that would change my mind about this?
I recall reading that there were some early papers proposing neural network ideas closer to the modern approach (iirc), but the hardware to try them just didn't exist at the time. That work was pretty different from the mainstream ideas of the era, though, and distinct from Minsky's work (I thought).
I think you may be mistaking Society of Mind for a different book. It's not about Lisp or "good old fashioned AI" but about how the human mind may work - something we could possibly simulate. It's a set of observations about how we perform thought. The ideas in the book aren't tied to a specific technology; they're about how a complex system such as the human brain works.
I don't think we're talking about the same book. Society of Mind is definitely not an in-the-weeds book that digs into things like Lisp in any detail. Rather than trying to change your mind, I'd encourage you to re-read Minsky's book if you found my essay compelling, and to ignore it if not.
You are surrounded by GOFAI programs that work well every moment of your life, from air traffic control planning to heuristics-based compiler optimization. GOFAI has this problem where as soon as it solves a problem and gets it working, the result stops being "real AI" in the minds of the population writ large.
Philosophy has the same problem, as a field. Many fields of study have grown out of philosophy, but as soon as something is identified, people say “well that’s not Philosophy, that’s $X” … and then people act like philosophy is useless and hasn’t accomplished anything.
Go read an AI textbook from the 80’s. It was all about optimizations and heuristics. That was the field.
Now if you write a SAT solver or a code optimizer you don't call it AI. But those algorithms were invented by AI researchers back when the population as a whole considered these sorts of things to be intelligent behavior.
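For a concrete sense of what that era's "optimizations and heuristics" looked like, here's a minimal sketch of a DPLL-style SAT solver - the names and structure are mine, for illustration, not from any particular textbook:

```python
# Minimal DPLL-style SAT solver. A formula is a list of clauses; each
# clause is a set of integer literals (3 means x3, -3 means NOT x3).

def simplify(clauses, literal):
    """Assume `literal` is True: drop satisfied clauses, shrink the rest."""
    out = []
    for clause in clauses:
        if literal in clause:
            continue                      # clause already satisfied
        if -literal in clause:
            clause = clause - {-literal}  # that literal can't help anymore
            if not clause:
                return None               # empty clause: contradiction
        out.append(clause)
    return out

def dpll(clauses):
    if not clauses:
        return True  # every clause satisfied
    # Classic heuristic: unit propagation - a one-literal clause forces
    # its assignment, so handle those before branching.
    unit = next((min(c) for c in clauses if len(c) == 1), None)
    if unit is not None:
        reduced = simplify(clauses, unit)
        return reduced is not None and dpll(reduced)
    literal = next(iter(clauses[0]))      # branch on an arbitrary literal
    for choice in (literal, -literal):
        reduced = simplify(clauses, choice)
        if reduced is not None and dpll(reduced):
            return True
    return False

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))  # True: satisfiable
```

Search plus pruning plus a branching heuristic - that was the bread and butter of the field, whatever we call it now.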
I agree with you that it was called AI by the field, but that’s also why the field was a joke imo.
Until LLMs, everything casually called AI clearly wasn't intelligence, and the field was pretty uninteresting - it looked like a dead end with no idea how to actually build intelligence. That changed around 2014, but it wasn't because of GOFAI; it was because of a new approach.
I completely agree with you, and I am surprised by the praise in this thread. The entire research program that this book represents has been dead for decades already.
It seems like you might be confusing "research programs" with things like "branding" and surface-level terminology. And probably missing the fact that society-of-mind is about architecture more than implementation, so it's pretty agnostic about implementation details.
The problem with your argument is that what you call an agent is nothing like what Minsky envisioned. The agents in Minsky's world are very simple rule-based entities ("nothing more than a few switches") that are composed in vast hierarchies. The argument Minsky is making is that if you compose enough simple agents in a smart way, intelligence will emerge. What we use today as agents is nothing like that: each agent is itself considered intelligent (directly opposing Minsky's vision that "none of our agents is intelligent"), while being organized along very simple principles. The toy sketch below makes the contrast concrete.
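A toy sketch of the two meanings of "agent" (all names here are hypothetical, purely for illustration): a Minsky-style agent is little more than a switch that gains meaning only from how it's wired into a hierarchy, while the modern usage puts the intelligence inside each component and keeps the composition simple.

```python
# Toy contrast between Minsky-style agents and modern "agents".
# All class and method names are hypothetical, for illustration only.

class MinskyAgent:
    """'Nothing more than a few switches': no intelligence of its own."""

    def __init__(self, children=(), threshold=1):
        self.children = children    # sub-agents lower in the hierarchy
        self.threshold = threshold  # how many sub-agents must fire

    def fire(self, stimulus) -> bool:
        # A leaf just reacts to the raw stimulus; an inner node is a
        # switch over switches. Any "intelligence" lives in the wiring.
        if not self.children:
            return bool(stimulus)
        votes = sum(child.fire(stimulus) for child in self.children)
        return votes >= self.threshold

class ModernAgent:
    """Modern usage: the component itself is already intelligent."""

    def __init__(self, llm):
        self.llm = llm  # a capable model; the loop around it is trivial

    def act(self, goal: str) -> str:
        return self.llm.complete(f"Plan and execute: {goal}")

# Minsky's picture: simple parts, behavior emerging from composition.
eye, ear = MinskyAgent(), MinskyAgent()
recognizer = MinskyAgent(children=(eye, ear), threshold=2)
print(recognizer.fire(1))  # True only when every sub-agent fires
```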
This is jogging my memory of what I thought I was remembering. I don't have the book anymore, but I recall starting it and reading a few chapters before putting it back on the shelf; its core ideas seemed to have been shown to be wrong.