"Critical" has a precise meaning in these kinds of systems; it essentially means when correlation lengths diverge (or, with a finite brain, become the size of the whole). In physical systems this happens at 2nd order phase transitions. Unfortunately most familiar phase transitions are first-order (boiling and freezing, for example) but the development of macroscopic magnetism as iron cools is an example.
Away from the critical point the dynamics become either
1) too strong, meaning that information has a hard time getting from one spot to another because it's in too much traffic
2) too weak, meaning that information has trouble being processed, because the traffic is so light that cars don't spend enough time together to come to equilibrium
In both cases the correlations will be short-ranged, in contrast to the critical case.
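If you want to see a correlation length concretely, here's a minimal toy sketch (my own construction, not from the comment above): an AR(1) chain x[i+1] = rho*x[i] + noise has correlations that decay as rho^r, i.e. a correlation length xi = -1/ln(rho), which blows up as rho -> 1.

```python
import numpy as np

def correlation(x, r):
    """Sample correlation between sites a distance r apart."""
    return np.corrcoef(x[:-r], x[r:])[0, 1]

rng = np.random.default_rng(0)
n = 200_000

for rho in (0.5, 0.9, 0.99):
    # Each site couples to its neighbour with strength rho.
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.normal()
    xi = -1.0 / np.log(rho)  # analytic correlation length: corr(r) = rho**r
    print(f"rho={rho}: xi={xi:6.1f}  C(1)={correlation(x, 1):.2f}  "
          f"C(50)={correlation(x, 50):.2f}")
```

At rho = 0.5 the correlation at distance 50 is indistinguishable from zero; at rho = 0.99 it's still around 0.6. Push rho -> 1 and the chain is correlated across its whole length, which is the finite-system version of "diverges".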
> "Critical" has a precise meaning in these kinds of systems; it essentially means when correlation lengths diverge (or, with a finite brain, become the size of the whole).
If someone doesn't know what "critical" means, they also don't know what a "correlation length" is, so I don't think this clarification is very helpful. Who was the intended audience?
We don't just learn a new subject through simpler ELI5 explanations.
We also learn by immersing ourselves further into the subject (like here, where we were given an alternative, still elaborate explanation), until things "click".
In immersive learning (like how kids learn language and most other worldly things naturally, outside of explicit teaching) we also come to understand the meaning of an unknown term by compounding other unknown terms and making correlations, connections, and deductions.
The meaning of "critical" in complex systems is accessible only if you've studied physics or theoretical neuroscience or something related. But I'm pretty sure correlations are familiar to most people in STEM, such as the average reader of HN.
Presumably the “correlation length” is something like “a length such that pairs of things separated by at most this distance are much more strongly correlated than pairs separated by much more than it” (but that’s largely just a guess)
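That guess is essentially right. For the record, the textbook definitions (standard stat mech, not from the article) are:

```latex
% connected two-point correlation function of some observable s
C(r) = \langle s(x)\, s(x+r) \rangle - \langle s(x) \rangle \langle s(x+r) \rangle

% away from the critical point: exponential decay with correlation length xi
C(r) \sim e^{-r/\xi}

% at the critical point: scale-free power-law decay
% (d = spatial dimension, eta = a critical exponent)
C(r) \sim r^{-(d - 2 + \eta)}
```

So "correlation length diverges" means the exponential envelope goes away and correlations fall off only as a power law, with no characteristic scale at all.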
I think you've assumed that they've just substituted some jargon with some other jargon, but that's not really what happened here. Rather, "critical" is left undefined, even though it's an ambiguous word; evanb merely provided a definition.
"The critical brain hypothesis suggests that neural networks do their best work when connections are not too weak or too strong." is actually a tautology, no?
Without a crisp explanation for what "too weak" or "too strong" mean, this is just saying "Neural networks work best when connections couldn't be changed to make them work better."
It's not a tautology, because it isn't clear that there is some threshold beyond which connections are too weak or too strong. I might think that more connections are more good, for instance.
So it might not be possible to reach a state with connections that are "too strong", but logically, if you could, you'd have to define them by suboptimal performance.
Imagine if any dinner conversation in America could flow into any other dinner conversation at will: it would probably saturate into a meaningless cacophony, and all relevance and context would be lost.
> I might think that more connections are more good, for instance.
Even still, there are a finite number of nodes, so there is a max number of connections. So the "just right" amount could be everything connected to everything, and the statement would still be true.
I suppose a counterexample would be something like “superconductivity works best at temperatures that are neither too hot nor too cold” or “the Carnot efficiency of a heat engine is at its maximum when the temperature delta is neither too large nor too small”.
I had a similar issue with the article. Essentially the information content seems to boil down to "there is a state where the brain works the best". For experts there is probably a lot to learn from the technicalities of this research, but the article leaves a layman a bit cold.
As far as I understood, they are talking about a certain homeostatic state of the brain which is optimal for the right amount of signal transmission between neurons. If a set of neurons is at the critical point, it neither fizzles the signal out nor amplifies it.
I think this is somewhat related to criticality in nuclear fission reactors: if a reactor is sub-critical, any reaction fizzles out; if it is super-critical, the reaction grows exponentially. Close to the critical point, the reaction is sustained at a relatively constant rate.
The brain probably couldn't function very well if its neurons were generally super-critical and all fired very quickly whenever one fired. And it probably wouldn't function well if the neurons were all sub-critical, so that any attempt at transmitting a signal would die out.
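The fission analogy maps neatly onto a branching process, where each firing triggers on average sigma further firings: sigma < 1 fizzles, sigma > 1 runs away, sigma = 1 is critical. A minimal sketch (my own toy model, assuming Poisson offspring):

```python
import numpy as np

def avalanche_size(sigma, rng, cap=100_000):
    """Total number of firings triggered by one seed event.
    Each firing triggers Poisson(sigma) further firings."""
    active, total = 1, 1
    while active and total < cap:
        active = int(rng.poisson(sigma, size=active).sum())
        total += active
    return total  # only the runaway regime ever hits `cap`

rng = np.random.default_rng(1)
for sigma in (0.8, 1.0, 1.2):  # sub-critical, critical, super-critical
    sizes = [avalanche_size(sigma, rng) for _ in range(2000)]
    print(f"sigma={sigma}: median={int(np.median(sizes))}, max={max(sizes)}")
```

Right at sigma = 1 the avalanche sizes follow a power law with no characteristic scale, which is exactly the kind of signature people go looking for in neural recordings.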
The researchers studying this critical-point theory of the brain also seem to be seeing parallels between brain activity and other critical systems. In particular, a system at a critical point can be tipped over it by a small input, or simply by a change in one of its components. And some systems in nature seem to use this to stay organised.
The hypothesis is very cool and seems sensible, but the article and video digress a bit too much into adjacent topics to offer a good introduction to the subject. For example, I couldn't see the relevance of the critical point between random and organised states in metal atoms. The example of spotting predators in nature was also a bit contrived, although it does illustrate how a small change in a large neural net can produce large, meaningful organisation in the activation of neurons.
It feels very intuitive that a brain needs to operate at this perfect balance between super-criticality and sub-criticality, I can understand why some scientists feel so strongly about it.
I would welcome corrections if I did not understand something as intended.
To me it wasn't immediately obvious that more information is transmitted with an intermediate number of connections than with a strongly connected network. I guess there is a link to entropy, i.e. how surprised you can be by the information received at one end of the network given its connectivity.
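One way to convince yourself is a damage-spreading experiment on a Kauffman-style random Boolean network (a generic sketch, not the model from the article): flip one bit of the initial state and see whether that bit of information dies out, survives, or gets scrambled as you raise the average in-degree K. For unbiased random functions the order-chaos transition sits at K = 2.

```python
import numpy as np

def final_damage(K, N=500, T=50, seed=0):
    """N nodes, each reading K random inputs through a random lookup
    table. Returns the normalized Hamming distance after T steps
    between two runs whose initial states differ in one bit."""
    rng = np.random.default_rng(seed)
    inputs = rng.integers(0, N, size=(N, K))       # random wiring
    tables = rng.integers(0, 2, size=(N, 2 ** K))  # random Boolean functions
    powers = 2 ** np.arange(K)

    def step(state):
        idx = (state[inputs] * powers).sum(axis=1)  # encode each node's inputs
        return tables[np.arange(N), idx]

    a = rng.integers(0, 2, size=N)
    b = a.copy()
    b[0] ^= 1                                       # the one-bit "message"
    for _ in range(T):
        a, b = step(a), step(b)
    return (a != b).mean()

for K in (1, 2, 4):  # sub-critical, critical, super-critical
    d = np.mean([final_damage(K, seed=s) for s in range(20)])
    print(f"K={K}: mean final damage = {d:.3f}")
```

At K = 1 the perturbation usually vanishes (the message is lost); at K = 4 it saturates to a large value (the message is scrambled into noise that's maximally "surprising" but useless); near K = 2 it persists without blowing up, which is the information-transmission sweet spot the entropy intuition points at.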
Reminds me of definitions of getting into a flow state while performing a task: the task shouldn't be so hard that it feels impossible, nor so easy that it's tedious and not worth any mental effort.
The man compared it to playing chess with someone at your rating, or a rating slightly above or below it. Much more engaging than playing a CHESS GOD, or a totally first-time player.
Not really. Signals have a finite power level. If you open all the lanes all the time, you'll get a very attenuated signal throughout the entire network. If some connections are stronger than others, that's when you can actually see interesting behavior.
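A back-of-the-envelope version of that, assuming a node splits a fixed output power P evenly over k links, each with independent noise power N_0 (both assumptions of this toy model):

```latex
\mathrm{SNR}_{\text{per link}} = \frac{P/k}{N_0},
\qquad
C_{\text{total}} = k \log_2\!\left(1 + \frac{P}{k N_0}\right)
\;\xrightarrow{\;k \to \infty\;}\; \frac{P}{N_0 \ln 2}
```

Each individual link drowns in noise as k grows, and the total capacity saturates: past some point, opening more lanes buys you nothing.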
I'm wondering if there is a link to the debate between competency (subcritical) vs. equity (supercritical) based criteria in hiring and admissions. Should we seek to achieve the critical middle point?
Isn't this just about as obvious as the fact that traffic flows best when traffic lights are neither always red nor always green?