As a physicist, my reaction to this is: how bizarre is that? Maybe he deserves a Nobel prize, but in physics?
Also, arguing that the Nobel prize is justified because neural networks are used in physics is like asking for Stephen Wolfram to be awarded a Nobel prize for Mathematica, which is used far more in physics as a tool. And he is a physicist and made contributions to the field of numerical relativity (the reason he created Mathematica in the first place).
The Royal Swedish Academy of Sciences fucked up so much with this choice.
By this definition, Claude Shannon (the father of Information Theory) clearly deserves a Nobel in Physics. The central concept in Information Theory is entropy, which is defined in essentially the same way as in physics. And Shannon's Information Theory clearly revolutionized our lives (telecommunications) much more than Hopfield networks or Hinton's Boltzmann machine.
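For reference, here are the standard textbook definitions (my addition, not from the original comment); they coincide up to the Boltzmann constant and the choice of logarithm base:

    H(X) = -\sum_i p_i \log_2 p_i       (Shannon entropy, in bits)
    S    = -k_B \sum_i p_i \ln p_i      (Gibbs entropy in statistical mechanics)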
In 1939 Claude Shannon won the "wrong" Nobel prize: the Alfred Noble Prize, presented by the American Society of Civil Engineers [0]. This causes a lot of confusion.
Claude Shannon never won a "real Nobel".
Someone changed the Wikipedia article today to call Hopfield a "physicist". Previously the article called him simply a scientist, because his main work wasn't limited to physics. I changed it back now; let's see if it holds up.
I suppose some might argue that being awarded the Nobel Prize in Physics is enough to call yourself a physicist.
…it does have the unfortunate implication, however, that nominations need not be restricted to physicists at all since any winner becomes a physicist upon receipt of the prize.
It’s sort of like the No True Scotsman but inverted, and with physicists instead of Scotsmen.
The Nobel Committee doesn’t represent the field of physics. I talked to a few former colleagues (theoretical physicists) just now and every one of them found this bizarre.
>where we all know that mathematics and/or CS deserve the honor
Or semiconductor manufacturers.
All the math and CS needed for AI can fit on a napkin, and had been known for 200+ years. It's the extreme scaling enabled by semiconductor science that really makes the difference.
That's absurd. The computer science needed for AI has not been known for 200 years. For example, transformers were only invented in 2017, diffusion models in 2015.
(When the required math was invented is a different question, but I doubt all of it was known 200 years ago.)
TBF, backpropagation was only introduced in the 1970s, although in hindsight it's quite a trivial application of the chain rule.
There were also plenty of "hacks" involved to make the networks scale, such as dropout regularization, batch normalization, semi-linear activation functions (e.g. ReLU), and adaptive stochastic gradient descent methods.
The maths for basic NNs is really simple, but the practice of them is really messy.
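To make the chain-rule point concrete, here is a minimal sketch (my own toy example with a single hidden unit and made-up weights, not anything from the thread) of backpropagation through y = w2 * tanh(w1 * x):

    import math

    def forward(x, w1, w2):
        h = math.tanh(w1 * x)  # hidden activation
        y = w2 * h             # output
        return h, y

    def backward(x, w1, w2, h, y, target):
        # Loss L = 0.5 * (y - target)^2, gradients via the chain rule
        dL_dy = y - target                # dL/dy
        dL_dw2 = dL_dy * h                # dy/dw2 = h
        dL_dh = dL_dy * w2                # dy/dh = w2
        dL_dw1 = dL_dh * (1 - h * h) * x  # d tanh(u)/du = 1 - tanh(u)^2, du/dw1 = x
        return dL_dw1, dL_dw2

    # one gradient-descent step on a single (x, target) pair
    x, target, lr = 0.5, 1.0, 0.1
    w1, w2 = 0.3, -0.2
    h, y = forward(x, w1, w2)
    g1, g2 = backward(x, w1, w2, h, y, target)
    w1, w2 = w1 - lr * g1, w2 - lr * g2

Each gradient is just the product of the local derivatives along the path from the loss back to the weight; deeper networks only add more factors to the product.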
Residual connections are also worth mentioning as an extremely ubiquitous adaptation; one will be hard-pressed to find a modern architecture that doesn't use them at least to some extent, to the point where the original ResNet paper sits at over 200k citations according to Google Scholar [1].
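For anyone unfamiliar with the idea, a residual block just computes y = x + F(x), so the identity path lets gradients and information skip past F's layers. A minimal NumPy sketch (illustrative only; the layer sizes and weights are made up, not taken from the ResNet paper):

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    def residual_block(x, W1, W2):
        f = W2 @ relu(W1 @ x)  # F(x): two small linear layers with a ReLU between them
        return x + f           # skip connection: identity plus residual

    rng = np.random.default_rng(0)
    d = 4
    x = rng.normal(size=d)
    W1 = rng.normal(scale=0.1, size=(d, d))
    W2 = rng.normal(scale=0.1, size=(d, d))
    y = residual_block(x, W1, W2)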
> All the math and CS needed for AI can fit on a napkin, and had been known for 200+ years.
This isn't really true. If you read a physics textbook from the early 1900s, they didn't really have multivariate calculus and linear algebra expressed as concisely as we do now. It would take several napkins. Plus, statistical mechanics was quite rudimentary, which is important for probability theory.