The brain naturally employs dimensionality reduction for memory. Sleep is one example. Another, simpler one is reading: how far back can you remember, word for word, what you've just read? Maybe a sentence at most? But you still remember enough to understand what you're reading, thanks to efficient dimensionality reduction.
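As a toy illustration of that idea (pure NumPy, with random vectors standing in for word embeddings, so every name here is made up), a long sequence can be collapsed into a single low-dimensional "gist" vector that still supports meaning comparisons, even though the exact words and their order are unrecoverable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each row is a 300-dim word embedding; a "passage" is 50 such
# rows, i.e. 15,000 numbers if stored verbatim.
passage = rng.normal(size=(50, 300))

# Gist: compress the whole sequence to one 300-dim mean vector.
# Word order and exact wording are gone; 50x less to store.
gist = passage.mean(axis=0)

# A lightly perturbed "paraphrase" and an unrelated passage.
paraphrase = passage + rng.normal(scale=0.1, size=passage.shape)
unrelated = rng.normal(size=(50, 300))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(gist, paraphrase.mean(axis=0)))  # ~1.0: the gist survives
print(cosine(gist, unrelated.mean(axis=0)))   # ~0.0: different content
```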
Some neural networks, such as LSTMs, mimic this, but it's a poor mimicry at best. The brain has a natural, built-in selection mechanism: it seems to "know" what to remember and what to forget. How could we implement something like this in a deep neural network?
(This is a key step toward giving computers "personality", which emerges from a selective set of memories and trained behavior.)
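For the curious, here's a minimal sketch of what that learned selection mechanism looks like, assuming PyTorch. It strips an LSTM down to a single gate that decides, per feature, what to keep and what to overwrite; the real LSTM has separate forget, input, and output gates, so this coupled form is a simplification, and all the names here are mine:

```python
import torch
import torch.nn as nn

class ForgetGateCell(nn.Module):
    """Simplified LSTM-style cell: a learned gate picks what to remember.

    f is a sigmoid over the current input and the previous hidden state;
    multiplying the memory c by f lets the network learn, per feature,
    what to keep (f near 1) and what to forget (f near 0).
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.forget = nn.Linear(input_size + hidden_size, hidden_size)
        self.write = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, h, c):
        combined = torch.cat([x, h], dim=-1)
        f = torch.sigmoid(self.forget(combined))  # learned: what to keep
        g = torch.tanh(self.write(combined))      # candidate new content
        return f * c + (1.0 - f) * g              # decay old, blend in new

# One step over a batch of 8 items: 16-dim input, 32-dim memory.
cell = ForgetGateCell(16, 32)
x, h, c = torch.randn(8, 16), torch.zeros(8, 32), torch.zeros(8, 32)
c = cell(x, h, c)
```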
The process in the article is pretty similar to dropout in neural nets, except that instead of "knowing" what to get rid of, dropout prunes randomly. Whether the brain does it randomly or intelligently is hard to say based on these studies.
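To make the contrast concrete, here's a small PyTorch sketch (the function names and sparsity level are made up) of the two strategies: dropout-style random pruning versus magnitude-based pruning, a common stand-in for "intelligent" selection that keeps the strongest connections:

```python
import torch

def random_prune(w: torch.Tensor, p: float) -> torch.Tensor:
    """Dropout-style: zero a random fraction p of weights, no selection."""
    return w * (torch.rand_like(w) >= p)

def magnitude_prune(w: torch.Tensor, p: float) -> torch.Tensor:
    """Zero the fraction p of weights with the smallest absolute value,
    keeping the strongest "synapses"."""
    k = int(p * w.numel())
    if k == 0:
        return w.clone()
    threshold = w.abs().flatten().kthvalue(k).values
    return w * (w.abs() > threshold)

w = torch.randn(64, 64)
print(random_prune(w, 0.5).eq(0).float().mean())     # ~0.5, random synapses cut
print(magnitude_prune(w, 0.5).eq(0).float().mean())  # 0.5, weakest synapses cut
```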
Anyone who is curious about what sudoscript is talking about should watch the HBO series "Westworld"; it dives directly into this question.