
Monday, May 31, 2021

Did dreams evolve to transcend overfitting?


A fascinating new paper proposes that dreams evolved to help the brain generalize, improving its performance on day-to-day tasks. Borrowing a concept from deep learning, Erik Hoel (2021):

“...outlines the idea that the brains of animals are constantly in danger of overfitting, which is the lack of generalizability that occurs in a deep neural network when its learning is based too much on one particular dataset, and that dreams help mitigate this ubiquitous issue. This is the overfitted brian [sic] hypothesis.”
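To make the term concrete, here is a toy illustration of overfitting (mine, not from the paper): a model with too much capacity can memorize a small noisy training set almost perfectly while failing badly on new samples drawn from the same source.

```python
import numpy as np

rng = np.random.default_rng(0)

def world(x):
    """The underlying regularity the learner should generalize to."""
    return np.sin(2 * np.pi * x)

x_train = rng.uniform(0, 1, 10)
y_train = world(x_train) + rng.normal(0, 0.1, 10)

# A degree-9 polynomial has enough capacity to thread every training point.
coeffs = np.polyfit(x_train, y_train, deg=9)

x_test = rng.uniform(0, 1, 200)        # fresh samples from the same "world"
train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - world(x_test)) ** 2)

print(f"train MSE: {train_mse:.4f}")   # near zero: the model memorized
print(f"test  MSE: {test_mse:.4f}")    # much larger: it failed to generalize
```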


The Overfitted Brain Hypothesis (OBH) proposes that the bizarre phenomenology of dreams is critical to their functional role. This view differs from most other neuroscientific theories, which treat dream content as epiphenomenal — a byproduct of brain activity involved in memory consolidation, replay, forgetting, synaptic pruning, etc.

In contrast, Hoel suggests that “it is the very strangeness of dreams in their divergence from waking experience that gives them their biological function.”

The hallucinogenic, category-breaking, and fabulist quality of dreams means they are extremely different from the “training set” of the animal (i.e., their daily experiences).
. . .

To sum up: the OBH conceptualizes dreams as a form of purposefully corrupted input, likely derived from noise injected into the hierarchical structure of the brain, causing feedback to generate warped or “corrupted” sensory input. The overall evolved purpose of this stochastic activity is to prevent overfitting. This overfitting may be within a particular module or task, such as a specific brain region or network, and may also involve generalization to out-of-distribution (unseen) novel stimuli.
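In machine-learning terms, Hoel's proposal maps onto a familiar regularization trick: injecting noise into the training inputs. Here is a minimal PyTorch sketch of that analogy (my gloss, not code from the paper; "dream_corrupt" and every other name here are illustrative):

```python
import torch
import torch.nn as nn

def dream_corrupt(x, noise_scale=0.5, drop_p=0.2):
    """Warp a batch of inputs: add Gaussian noise and randomly zero features,
    a crude stand-in for the 'corrupted' sensory input attributed to dreams."""
    noisy = x + noise_scale * torch.randn_like(x)
    keep = (torch.rand_like(x) > drop_p).float()
    return noisy * keep

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):
    x = torch.randn(16, 64)            # stand-in for real waking experience
    y = torch.randint(0, 10, (16,))
    if step % 4 == 0:                  # a periodic "sleep" phase
        x = dream_corrupt(x)
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```

The point of the sketch is only that training occasionally on deliberately warped inputs discourages memorization of the clean training set, which is the regularization role the OBH assigns to dreaming.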


Speaking of overfitting, I was reminded of Google's foray into artificial neural networks for image classification, which was all the rage in July 2015. The DeepDream program is a visualization tool that shows what the layers of the neural network have learned:

One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation.
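In modern PyTorch terms, "turning the network upside down" amounts to running gradient ascent on the input image itself. A rough sketch of the idea (my reconstruction, not Google's original DeepDream code; the layer cutoff and step size are arbitrary, and the pretrained weights download on first use):

```python
import torch
from torchvision import models

# A pretrained ImageNet classifier; we only use its convolutional layers.
net = models.vgg16(weights="IMAGENET1K_V1").features[:20].eval()
for p in net.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # or load a real photo

for _ in range(50):
    loss = net(image).norm()  # how strongly this layer responds to the image
    loss.backward()
    with torch.no_grad():
        # Gradient *ascent*: nudge the pixels to excite the layer even more,
        # amplifying whatever patterns (dogs, birds, pagodas...) it detects.
        image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()
```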


The image above is characteristic of the hallucinogenic output from the DeepDream web interface, and it illustrates that the original training set was filled with dogs, birds, and pagodas. DeepDream images inspired blog posts with titles like "Do neural nets dream of electric sheep?" and "Do Androids Dream of Electric Bananas?" and my favorite, "Scary Brains and the Garden of Earthly Deep Dreams."


Reference

Hoel E. (2021). The overfitted brain: Dreams evolved to assist generalization. Patterns 2(5):100244.



1 comment:

  1. This vaguely reminded me of an old axe I grind: the idea of "photographic memory" is ridiculous. As a long-time photography enthusiast, I'll make the obvious point that photography itself is nowhere near as good as (ideal) "photographic memory" (photographs are only sharp in the plane of focus) and even worse, the human eye is a terrible camera with only a tiny area of decent acuity: to actually record detail from a scene, you have to put the image of that area of detail on your fovea and focus your eye on it (for us old folks, that means moving our head so that we're looking through an appropriate place in our varifocal (or bi- or trifocal) glasses). And that's before anyone notices that while it feels as though we are looking at an infinitely sharp photograph when we look at the world, we do it in saccades, one little corner of the universe at a time. FWIW, this video covers the literature on (and thus debunks) photographic and other extreme forms of memory quite nicely (without even getting into the optical issues): the bloke talks really fast, so it doesn't get boring.

    https://www.youtube.com/watch?v=tXXSXTZZMXA&ab_channel=TodayIFoundOut

    Somewhat more relevant to this post, as per a few of the snarky comments in "Spike!" (thanks for the recommendation), the CNN model really doesn't look like anything that's actually happening in any brains, mammalian, human, or otherwise, at least on this planet. So thinking that something that happens in CNNs might be indicative of what goes on in actual brains is coincidental at best. Well, coincidence sometimes coughs up interesting hints. Sometimes...
