Monday, May 31, 2021

Did dreams evolve to transcend overfitting?


A fascinating new paper proposes that dreams evolved to help the brain generalize, which improves its performance on day-to-day tasks. Incorporating a concept from deep learning, Erik Hoel (2021):

“...outlines the idea that the brains of animals are constantly in danger of overfitting, which is the lack of generalizability that occurs in a deep neural network when its learning is based too much on one particular dataset, and that dreams help mitigate this ubiquitous issue. This is the overfitted brian [sic] hypothesis.”


The Overfitted Brain Hypothesis (OBH) proposes that the bizarre phenomenology of dreams is critical to their functional role. This view differs from most other neuroscientific theories, which treat dream content as epiphenomenal — a byproduct of brain activity involved in memory consolidation, replay, forgetting, synaptic pruning, etc.

In contrast, Hoel suggests that “it is the very strangeness of dreams in their divergence from waking experience that gives them their biological function.”

The hallucinogenic, category-breaking, and fabulist quality of dreams means they are extremely different from the “training set” of the animal (i.e., their daily experiences).
. . .

To sum up: the OBH conceptualizes dreams as a form of purposefully corrupted input, likely derived from noise injected into the hierarchical structure of the brain, causing feedback to generate warped or “corrupted” sensory input. The overall evolved purpose of this stochastic activity is to prevent overfitting. This overfitting may be within a particular module or task, such as a specific brain region or network, and may also involve generalization to out-of-distribution (unseen) novel stimuli.
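The machine-learning analogy here can be made concrete with a toy sketch (my own illustration, not code from the paper). A high-capacity model memorizes its noisy "training set" of daily experience perfectly yet fails on novel stimuli; refitting on deliberately corrupted copies of the same experience — the role the OBH assigns to dreams — is a standard regularization trick (noise injection / data augmentation):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Daily experience": 10 noisy samples of an underlying regularity.
x_train = np.linspace(0, 2 * np.pi, 10)
y_train = np.sin(x_train) + rng.normal(0, 0.3, x_train.size)

# Held-out "novel stimuli" the animal has never seen.
x_test = np.linspace(0.3, 2 * np.pi - 0.3, 50)
y_test = np.sin(x_test)

def mse(coefs, x, y):
    return float(np.mean((np.polyval(coefs, x) - y) ** 2))

# A degree-9 polynomial has enough capacity to memorize all 10 points.
overfit = np.polyfit(x_train, y_train, 9)

# "Dreaming": refit on many deliberately corrupted copies of the
# same experience (noise injected into both input and output).
x_aug = np.tile(x_train, 50) + rng.normal(0, 0.3, 500)
y_aug = np.tile(y_train, 50) + rng.normal(0, 0.3, 500)
dreamed = np.polyfit(x_aug, y_aug, 9)

print("train MSE, memorized:      ", mse(overfit, x_train, y_train))
print("test MSE,  memorized:      ", mse(overfit, x_test, y_test))
print("test MSE,  noise-injected: ", mse(dreamed, x_test, y_test))
```

The memorizing fit drives its training error to essentially zero while its test error stays much higher — the signature of overfitting that, on the OBH, corrupted dream input evolved to counteract.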


Speaking of overfitting, I was reminded of Google's foray into artificial neural networks for image classification, which was all the rage in July 2015. The DeepDream program is a visualization tool that shows what the layers of the neural network have learned:

One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation.


The image above is characteristic of the hallucinogenic output from the DeepDream web interface, and it illustrates that the original training set was filled with dogs, birds, and pagodas.  DeepDream images inspired blog posts with titles like, Do neural nets dream of electric sheep? and Do Androids Dream of Electric Bananas? and my favorite, Scary Brains and the Garden of Earthly Deep Dreams.
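The "turn the network upside down" trick is just gradient ascent on the input rather than on the weights: start from noise and nudge the input until it strongly excites a chosen unit, so the input comes to resemble whatever feature that unit detects. A minimal sketch with a toy one-linear-unit "network" (my own toy, not Google's DeepDream code):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "network": one linear unit whose weight vector encodes a learned feature.
feature = rng.normal(size=64)
feature /= np.linalg.norm(feature)

def activation(x):
    return feature @ x  # how strongly the unit responds to input x

# DeepDream-style gradient ascent on the INPUT: for a linear unit,
# the gradient of the activation with respect to x is just `feature`.
x = rng.normal(size=64)          # start from random noise
for _ in range(200):
    x += 0.5 * feature           # ascend the activation's gradient
    x /= np.linalg.norm(x)       # keep the "image" bounded

# The optimized input now aligns with the unit's preferred feature.
similarity = float(feature @ x)  # cosine similarity (both are unit-norm)
print(round(similarity, 3))
```

In the real DeepDream the unit is a deep convolutional layer and the gradient comes from backpropagation, but the principle is the same: the dogs, birds, and pagodas appear because those are the features the network learned to respond to.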


Reference

Hoel E. (2021). The overfitted brain: Dreams evolved to assist generalization. Patterns 2(5):100244.


