
The brain naturally employs dimensionality reduction for memory. Sleep is one example. Another, simpler one is reading -- how far back can you remember word for word when you are reading something? Maybe a sentence at most? But you still remember enough to understand what you're reading, because of efficient dimensionality reduction.

Some neural networks mimic this, such as LSTMs. But it's a poor mimicry at best. The brain has a natural, built-in selection mechanism. It seems to "know" what to remember and what to forget. How could we implement something like this in a deep neural network?

(This is a key step toward giving computers "personality", which emerges from a selective set of memories and trained behavior.)
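For concreteness, the LSTM's built-in (if crude) selection mechanism is the forget gate: a learned sigmoid that scales each cell-state dimension toward zero ("forget") or one ("keep"). A minimal sketch with NumPy, using toy dimensions and random weights purely for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forget_gate(h_prev, x, W_f, b_f):
    """LSTM forget gate: returns a value in (0, 1) per cell-state
    dimension -- near 0 means "forget", near 1 means "keep"."""
    return sigmoid(W_f @ np.concatenate([h_prev, x]) + b_f)

# Toy sizes (hypothetical): hidden size 4, input size 3.
rng = np.random.default_rng(0)
h_prev = rng.standard_normal(4)   # previous hidden state
x = rng.standard_normal(3)        # current input (e.g. a word embedding)
W_f = rng.standard_normal((4, 7))
b_f = np.zeros(4)

f = forget_gate(h_prev, x, W_f, b_f)
c_prev = rng.standard_normal(4)   # previous cell state ("memory")
c_damped = f * c_prev             # old memory scaled down where f is small
```

The gate is trained end-to-end, so "what to remember" is only learned implicitly from the loss, which is part of why it's such a poor mimicry of the brain's mechanism.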



The process in the article is pretty similar to dropout in neural nets. But instead of "knowing" what to get rid of, we randomly prune. The brain may do it randomly or intelligently, hard to say based on these studies.

See here for info on dropout: http://bit.ly/1mneaL5
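To make the comparison concrete, here's a sketch of (inverted) dropout -- note the pruning is purely random, with no notion of which units are "important":

```python
import numpy as np

def dropout(activations, p_drop, rng):
    """Inverted dropout: randomly zero a fraction p_drop of units and
    rescale the survivors so the expected activation is unchanged."""
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

rng = np.random.default_rng(42)
a = np.ones(10)
a_dropped = dropout(a, 0.5, rng)   # entries are either 0.0 or 2.0
```

Applied only at training time; at inference all units are kept, and the rescaling above makes the two regimes consistent.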


Seems like it scores the pruning algorithm and adjusts it as well.


Anyone who is curious about what sudoscript is talking about should watch the HBO series "Westworld". The series dives directly into this question.


I don't know how we'd implement it. But I've wondered for a while whether we can make a real AI before we create something that can sleep.


Which scientific domains have researched this? I've been thinking about it at the shower-thought level for a decade, but I'm eager to learn more.


A second neural net which decides which memories are important and which ones are not.
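One minimal version of that idea: a learned scorer (a "memory critic") that maps each memory vector to a keep-probability, and pruning whatever falls below a threshold. Everything here is hypothetical and just for illustration -- a real system would train the scorer's weights against some downstream objective:

```python
import numpy as np

def importance_scores(memories, w, b):
    """Hypothetical 'memory critic': a linear scorer mapping each
    memory vector to a keep-probability via a sigmoid."""
    logits = memories @ w + b
    return 1.0 / (1.0 + np.exp(-logits))

def prune(memories, w, b, keep_threshold=0.5):
    """Keep only the memories the critic rates above the threshold."""
    scores = importance_scores(memories, w, b)
    return memories[scores >= keep_threshold]

rng = np.random.default_rng(7)
memories = rng.standard_normal((20, 8))  # 20 memory vectors, dim 8
w = rng.standard_normal(8)               # untrained scorer weights
kept = prune(memories, w, b=0.0)
```

Which of course just moves the question one level up: what trains the critic? Hence the reply below.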


It's homunculi all the way down...


How should we account for humans with photographic memory?



