Tuesday, June 29, 2021

The rs-FC fMRI Law of Attraction (i.e., Resting-State Functional Connectivity of Speed Dating Choice)


Feeling starved for affection after 15 months of pandemic-mandated social distancing? Ready to look for a suitable romantic partner by attending an in-person speed dating event? Just recline inside this noisy tube for 10 minutes, think about anything you like, and our algorithm will Predict [the] Compatibility of a Female-Male Relationship!

This new study by Kajimura and colleagues garnered a lot of attention on Twitter, where it was publicized by @INM7_ISN (Simon Eickhoff) and @Neuro_Skeptic. The prevailing sentiment was not favorable (check the replies)...

 

 

Full disclosure: I was immediately biased against the claims made in this study...

This research emphasizes the utility of neural information to predict complex phenomena in a social environment that behavioral measures alone cannot predict.

...and have covered earlier attempts at linking speed dating choice to a proxy of neural activity. But I wanted to be fair and see what the authors did, since their results reflect an enormous amount of work.

Here I will argue that a 10-minute brain scan cannot predict who you will choose at a speed dating event. The resultant measures are even further away from identifying a compatible mate for you, since only 5% of speed dating interactions result in a relationship of any sort (6% for sexual relationships and 4% for romantic relationships, according to one study).

I was flabbergasted that anyone would think a “resting” state MRI scan (looking at “+” for 10 min) and its resulting pattern of correlated BOLD signal fluctuations would reflect a level of superficial desirability that can be detected by a potential mate at greater-than-chance levels. Another disclosure: this is far from my field of expertise. So I searched the literature. Apparently, “patterns of functional brain activity during rest encode latent similarities (e.g., in terms of how people think and behave) that are associated with friendship” (Hyon et al., 2020). However, that study was conducted in a small town in South Korea (total population 860), allowing a detailed social network analysis. Plus, people knew each other well and experienced many of the same day-to-day events, which could shape their functional connectomes. Not exactly relevant for predicting strangers' speed dating choices, eh?

Another paper identified a “global personality network” based on data from 984 participants in the Human Connectome Project (Liu et al., 2019). The sample was large enough to support a training set of n=801 and a “hold-out” dataset (n=183) for validation purposes. The results supported the authors' “similar brain, similar personality” hypothesis. But in the dating world, how much do “similars” attract (compared to the popular saying, “opposites attract”)? Well, why not construct (dis)similarity profiles between potential pairs by taking the absolute value of differences in functional connectivity (FC), and combine those with values of similarities in FC? Does that make sense?? And to arrive at this metric, there's a whole lot of machine learning (but with much smaller training sets)...
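To make that concrete, here is a minimal sketch (my reconstruction, not the authors' pipeline) of how a pair's two FC vectors could be turned into combined similarity/dissimilarity features; the random data and the product-based similarity term are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_edges = 6670                          # lower triangle of a 116 x 116 ROI matrix

fc_a = rng.normal(size=n_edges)         # stand-ins for two individuals' FC vectors
fc_b = rng.normal(size=n_edges)

dissimilarity = np.abs(fc_a - fc_b)     # how far apart the two brains are on each connection
similarity = fc_a * fc_b                # large and positive when an edge is strong in both people

pair_features = np.concatenate([similarity, dissimilarity])
print(pair_features.shape)              # one feature vector per pair, fed to the classifier
```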

Identity Classification 

A separate sample of 44 individuals from the Human Connectome Project was used to construct the Similarity of Connectivity Pattern between pairs (Kajimura et al., 2021). These 44 participants had each been scanned twice, allowing 44 self-self pairs (Jessica at time 1 vs. Jessica at time 2), which were compared to 44 self-other pairs (Jessica at time 1 vs. Jennifer at time 2). Self-self “feature values” always show a positive correlation, and these were used to define “individual-specific information.”

26,680 feature values?

To start, 116 regions of interest (ROIs) were defined by Automated Anatomical Labeling (AAL). Pairwise comparisons of these for Self scan #1 vs. Self scan #2 (or vs. Other scan #2) resulted in a vector of 6,670 functional connectivities for each data point [(116 × 115)/2]. Then multiply this by four (!!) and you get 26,680 values fed into a machine learning classifier. Why four? Because the slow fluctuating BOLD signals were decomposed into four frequency bands for the classification procedure. Was this necessary? Does it add robustness, or merely more opportunities for false positive results?
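The feature-count arithmetic is easy to verify with a toy sketch (fake time series and a single correlation matrix; nothing about the actual AAL parcellation or frequency decomposition is reproduced here):

```python
import numpy as np

n_rois, n_bands = 116, 4

ts = np.random.randn(200, n_rois)            # fake time series: 200 volumes x 116 ROIs
fc = np.corrcoef(ts, rowvar=False)           # 116 x 116 correlation (connectivity) matrix
edges = fc[np.tril_indices(n_rois, k=-1)]    # unique ROI pairs below the diagonal

print(edges.size)                  # 6670  = (116 * 115) / 2
print(edges.size * n_bands)        # 26680 feature values per data point
```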


Fig. 3 (Kajimura et al., 2021). Top 100 feature values, i.e., absolute values of differences between functional connectivity that contributed to identity classification for three frequency bands [the fourth was eliminated because the classifier could not distinguish between self-self and self-other pairs].

The machine learning algorithm was sparse logistic regression with elastic-net regularization (SLR-EN), which usually prevents overfitting, but I don't know if the algorithm can overcome 26,680 feature values with only 44 subjects. Maybe I'm misunderstanding (and others can correct me if I'm wrong), but the number of participants is rather low for SLR-EN given the number of input parameters? Then...
The classification accuracy was evaluated using a stratified k-fold cross-validation procedure. ... The ratio of the number of correctly classified labels was then obtained as the classification accuracy.
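For the general shape of such an analysis, here is a rough stand-in using scikit-learn's elastic-net logistic regression with stratified k-fold cross-validation. This is not the authors' SLR-EN implementation, and the random data (88 pairs by 26,680 features) are there only to make the samples-versus-features imbalance tangible; expect it to run slowly and to score near chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(88, 26680))    # 44 self-self + 44 self-other pairs, 26,680 features each
y = np.repeat([1, 0], 44)           # 1 = self-self, 0 = self-other

clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=2000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)   # slow: far more features than samples
print(scores.mean())                         # ~0.5 (chance) on random data
```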


The regional results are below, showing a 7 x 7 brain network matrix with similarity in red (positive coefficients) and dissimilarity in blue (negative coefficients). We're still in the realm of correctly classifying self-self, so dissimilarities were considered artifacts of overfitting [but similarities were not?]. If binomial tests showed a greater contribution from similar than from dissimilar connectivities, this was considered an indicator of self. This was true of F1 (53 out of 67, p<.001) and F2 (52 out of 67, p<.001), but not F3, which was at chance (33 out of 67).
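Those binomial tests are easy to redo from the reported counts (assuming, as I read it, a one-sided test against chance = 0.5):

```python
from scipy.stats import binomtest

counts = {"F1": (53, 67), "F2": (52, 67), "F3": (33, 67)}
for band, (k, n) in counts.items():
    p = binomtest(k, n, p=0.5, alternative="greater").pvalue
    print(f"{band}: {k}/{n} similar, one-sided p = {p:.2g}")
```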


Fig. 4 (modified from Kajimura et al., 2021). Ratio of self-self classification connectivity in terms of brain networks. Red and blue matrices display the results of similarity- and dissimilarity-based contributions [at three frequency bands]. ... Vis, visual network; Som, somatosensory-motor network; Sal, salience network; Lim, limbic system; Con, executive control network; Def, default mode network; Cer, cerebellum.


Separate Statistical Analysis — a bevy of Pearsons 

Before we turn to speed dating, two more analyses are shown below for the identity classification study. The first involved a boatload of FDR-corrected Pearson's correlations of the functional connectivity vectors for self–self pairs vs. self–other pairs (Fig. 2A). The next shows the effectiveness of the machine learning (ML) algorithm in classifying these pairs (Fig. 2B).
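Here is my guess at what the Fig. 2A-style analysis looks like in code: a Pearson correlation between the two FC vectors of each pair, with Benjamini-Hochberg FDR correction across pairs. The simulated "rescans" below are stand-ins for real data.

```python
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_edges = 6670
day1 = rng.normal(size=(44, n_edges))
day2 = day1 + 0.8 * rng.normal(size=(44, n_edges))   # same people rescanned, with noise

results = [pearsonr(day1[i], day2[i]) for i in range(44)]   # one r per self-self pair
r_vals = [res[0] for res in results]
p_vals = [res[1] for res in results]

reject, p_fdr, _, _ = multipletests(p_vals, alpha=0.05, method="fdr_bh")
print(f"mean self-self r = {np.mean(r_vals):.2f}; {reject.sum()} of 44 survive FDR")
```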

 

Fig. 2 (modified from Kajimura et al., 2021). Identity classification. (A) Similarity of overall functional connectivity profile was significantly higher for the self–self pair (dark-colored distribution) than the self–other pair (light-colored distribution) for all frequency bands. [I've included arrows to point out where they start to diverge] (B) Distribution of differences between ML classification accuracy with true and randomized pair labels.

As the authors predicted, self-self comparisons yielded more similar connectivities than self-other pairs. The ML algorithm identified self for three of the frequency bands (F1-F3) at greater than chance levels (12.4%, 14.8%, and 16.3% better than chance, respectively). However, the algorithm is still wrong a lot of the time. This is especially important for the matchmaking study...


Speed Dating

The authors provided a nice self-explanatory graphic presenting an overview of the Speed Dating study (click on image for a larger view). Data collection and analysis followed the flow of the Identity experiment.



Participants and Social Event

The participants were 42 heterosexual young adults (20-23 yrs), with 20 females and 22 males. Why these numbers were not perfectly matched, I do not know. The resting-state fMRI scan took place several days before the first speed dating session. [I'm assuming it was the first, because the Methods say there were three speed dating events. There was also a post-dating scan, which was described in another paper]. The three-hour event was held in a large room where pairs of participants had 3-minute conversations with every member of the opposite sex. After each conversation, all the men moved to the next table. When all the speed dates were over, each person was asked to identify at least half of the opposite-sex individuals they'd like to chat with again.

Well, there's a problem here — a requirement to select half the dates could result in less-than-optimal choices for some individuals. This requirement was necessary for sampling purposes, but it makes you wonder about the quality of the matches. Also, there was a strong possibility of unilateral matches — one individual thinks they've found their dream partner, but that feeling was not reciprocated. When both members of a pair said "yes," they were considered compatible. Out of a total of 440 possible pairs, 158 were compatible and 282 incompatible.
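As a toy illustration of the mutual-choice bookkeeping (made-up yes rates, not the actual choices), here is how "compatible" pairs accumulate when everyone must nominate about half of the opposite sex:

```python
import numpy as np

rng = np.random.default_rng(0)
n_women, n_men = 20, 22

# Everyone nominates roughly half of the opposite sex (as instructed).
women_say_yes = rng.random((n_women, n_men)) < 0.5
men_say_yes = rng.random((n_women, n_men)) < 0.5

compatible = women_say_yes & men_say_yes     # "compatible" = mutual yes
print(n_women * n_men, "possible pairs;", compatible.sum(), "mutual matches")
```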

The Compatible vs. Incompatible comparisons are the key findings of the study (Fig. 5, with A and B panels as above). Unlike the Identity comparison, compatible male-female pairs did not show more similar functional connectivity patterns than incompatible pairs (Fig. 5A).

Well then...

“This indicates that the compatibility of female–male relationships is not necessarily represented by the similarity of functional connectivity patterns.”

Yes.

“Unlike identity classification, compatibility classification was supported by the considerable negative coefficients of the features” (shown in Fig. 6 of the paper). We shall not interpret this as “opposites attract.”

 


 
Fig. 5 (Kajimura et al., 2021). Compatibility classification. (A) Similarity of overall functional connectivity profile. There was no significant difference between compatible (dark-colored distribution) and incompatible (light-colored distribution) pairs. (B) Distribution of differences between the classification accuracy with true labels of pairs and that with a randomized label for each frequency band. Vertical lines indicate chance levels.


Fig. 5B shows classification accuracy for compatible pairs, which was above chance for F1 and F2. Before investing in a commercial venture, however, you should know that the benefit beyond guessing is only 5.47% and 4.95%, respectively. Thus, I disagree with the claim that...
...the current results indicate that resting-state functional connectivity has information about behavioral tendencies that two individuals actually exhibit during a dyadic interaction, which cannot be measured by self-report methods and thus may remain hidden unless we use neuroimaging methods concurrently.

To review the potential limitations of the study: we can't assess the quality of matches (meh vs. enthusiastic), we don't know what the participants were thinking about during their resting-state fMRI scan (see Gonzalez-Castillo et al., 2021), and we don't know their mental state during the scan. Although rs-FC fMRI is often considered a stable “trait,” state factors and motion artifacts can affect the results on a given day (Geerligs et al., 2015). Indeed, ~35% of the time, the present paper was unsuccessful in classifying the same person scanned on two different days (and that's excluding the one frequency band, of four, that was not above chance).

Is there something intrinsic encoded in BOLD signal fluctuations that can predict who we will find appealing (and a potential “match”) after a three-minute interaction? Decisions at speed dating events are mostly based on physical attractiveness, so it seems very implausible to me.


Further Reading (the Speed Dating Collection)

The Neuroscience of Speed Dating Choice

The Electroencephalogram Cocktail Party

EEG Speed Dating

The Journal of Speed Dating Studies

Winner of Best Title

How I Meditated with Your Mother: Speed Dating at Temples and Shrines in Contemporary Japan

 

References

Geerligs L, Rubinov M, Henson RN. (2015). State and trait components of functional connectivity: individual differences vary with mental state. Journal of Neuroscience 35(41):13949-61.

Gonzalez-Castillo J, Kam JW, Hoy CW, Bandettini PA. (2021). How to Interpret Resting-State fMRI: Ask Your Participants. Journal of Neuroscience 41(6):1130-41.
 
Hyon R, Youm Y, Kim J, Chey J, Kwak S, Parkinson C. (2020). Similarity in functional brain connectivity at rest predicts interpersonal closeness in the social network of an entire village. Proceedings of the National Academy of Sciences 117(52):33149-60.

 
 
 


Monday, May 31, 2021

Did dreams evolve to transcend overfitting?


A fascinating new paper proposes that dreams evolved to help the brain generalize, which improves its performance on day to day tasks. Incorporating a concept from deep learning, Erik Hoel (2021):

“...outlines the idea that the brains of animals are constantly in danger of overfitting, which is the lack of generalizability that occurs in a deep neural network when its learning is based too much on one particular dataset, and that dreams help mitigate this ubiquitous issue. This is the overfitted brian [sic] hypothesis.”

 

The Overfitted Brain Hypothesis (OBH) proposes that the bizarre phenomenology of dreams is critical to their functional role. This view differs from most other neuroscientific theories, which treat dream content as epiphenomenal — a byproduct of brain activity involved in memory consolidation, replay, forgetting, synaptic pruning, etc.  

In contrast, Hoel suggests that “it is the very strangeness of dreams in their divergence from waking experience that gives them their biological function.”

The hallucinogenic, category-breaking, and fabulist quality of dreams means they are extremely different from the “training set” of the animal (i.e., their daily experiences).
. . .

To sum up: the OBH conceptualizes dreams as a form of purposefully corrupted input, likely derived from noise injected into the hierarchical structure of the brain, causing feedback to generate warped or “corrupted” sensory input. The overall evolved purpose of this stochastic activity is to prevent overfitting. This overfitting may be within a particular module or task such a specific brain region or network, and may also involve generalization to out-of-distribution (unseen) novel stimuli.
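The deep-learning analogy is easy to demonstrate on a toy problem. The sketch below (my own illustration, not Hoel's model) fits an over-flexible polynomial to a small "training set," then refits it after replicating the data with noise-injected inputs; the corrupted copies typically act as a regularizer and improve generalization to new samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + 0.1 * rng.normal(size=n)     # "the world" + measurement noise
    return x, y

def fit_poly(x, y, degree=12):
    return np.polyfit(x, y, degree)                  # deliberately over-flexible model

def test_mse(coefs, x, y):
    return np.mean((np.polyval(coefs, x) - y) ** 2)

x_train, y_train = make_data(25)                     # small daily "training set"
x_test, y_test = make_data(1000)                     # unseen situations

plain = fit_poly(x_train, y_train)                   # prone to overfitting

# "Dreaming": replicate the training set with corrupted (noise-injected) inputs
reps = 20
x_dream = np.tile(x_train, reps) + 0.15 * rng.normal(size=x_train.size * reps)
y_dream = np.tile(y_train, reps)
dreamer = fit_poly(x_dream, y_dream)

print("test MSE, plain fit:      ", round(test_mse(plain, x_test, y_test), 3))
print("test MSE, noise-augmented:", round(test_mse(dreamer, x_test, y_test), 3))
```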


Speaking of overfitting, I was reminded of Google's foray into artificial neural networks for image classification, which was all the rage in July 2015. The DeepDream program is a visualization tool that shows what the layers of the neural network have learned:

One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation.
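In code, "turning the network upside down" amounts to gradient ascent on the pixels rather than the weights. A minimal PyTorch sketch (assuming a recent torchvision; the truncation layer, step size, and starting image are arbitrary choices of mine, not Google's DeepDream code):

```python
import torch
import torchvision.models as models

# Truncate a pretrained network at a mid-level layer and freeze its weights.
model = models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in model.parameters():
    p.requires_grad_(False)

# Start from noise (a real photo works too) and let the gradient flow to the image.
img = torch.rand(1, 3, 224, 224, requires_grad=True)

for step in range(50):
    act = model(img)
    loss = act.norm()                 # "enhance the image to excite this layer"
    loss.backward()
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)  # normalized ascent step
        img.clamp_(0, 1)
    img.grad.zero_()
```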


The image above is characteristic of the hallucinogenic output from the DeepDream web interface, and it illustrates that the original training set was filled with dogs, birds, and pagodas.  DeepDream images inspired blog posts with titles like, Do neural nets dream of electric sheep? and Do Androids Dream of Electric Bananas? and my favorite, Scary Brains and the Garden of Earthly Deep Dreams.


Reference

Hoel E. (2021). The overfitted brain: Dreams evolved to assist generalization. Patterns 2(5):100244.

 



Sunday, April 25, 2021

Hoarders and Collectors


Andy Warhol's collection of dental models

 
Pop artist Andy Warhol excelled in turning the everyday and the mundane into art. During the last 13 years of his life, Warhol put thousands of collected objects into 610 cardboard boxes. These Time Capsules were never sold as art, but they were meticulously cataloged by museum archivists and displayed in a major exhibition at the Andy Warhol Museum. “Warhol was a packrat. But that desire to collect helped inform his artistic point of view.” Yet Warhol was aware of his compulsion, and it disturbed him: “I'm so sick of the way I live, of all this junk, and always dragging home more.”

Where does the hobby of collecting cross over into hoarding, and who makes this determination? 

Artists get an automatic pass into the realm of collectionism, no matter their level of compulsion. The Vancouver Art Gallery held a major exhibition of the works of Canadian writer and artist Douglas Coupland in 2014. One of the sections consisted of a room filled with 5,000 objects collected over 20 years and carefully arranged in a masterwork called The Brain. Here's what the collection looked like prior to assembly.
 

Materials used in the The Brain, 2000–2014, mixed-media installation with readymade objects. Courtesy of the Artist and Daniel Faria Gallery. Photo: Trevor Mills, Vancouver Art Gallery.


Hoarding, on the other hand, lacks the artistic intent or deliberate organization of collecting. Collectors may be passionate, but their obsessions/compulsions do not hinder their everyday function (or personal safety). According to Halperin and Glick (2003):
“Characteristically, collectors organize their collections, which while extensive, do not make their homes dysfunctional or otherwise unlivable. They see their collections as adding a new dimension to their lives in terms of providing an area of beauty or historical continuity that might otherwise be lacking.”
 
The differential diagnosis for the DSM-5 classification of Hoarding Disorder vs. non-pathological Collecting considers order and value to be of primary importance.



Fig. 2 (Nakao & Kanba, 2019).
If possessions are well organized and have a specific value, the owner is defined as a ‘collector.’ Medical conditions that cause secondary hoarding are excluded from Hoarding Disorder. The existence of comorbidities such as obsessive-compulsive disorder (OCD), autism spectrum disorder (ASD), and attention deficit hyperactivity disorder (ADHD) must be excluded as well.


I've held onto the wish of writing about this topic for the last eight months...


...because of the time I spent sorting through my mother's possessions between July 2020 and November 2020 after she died on July 4th. This process entailed flying across the country five times in a total of 20 different planes in the midst of a pandemic.
 
Although my mother showed some elements of hoarding, she didn't meet clinical criteria. She had various collections of objects (e.g., glass shoes, decorator plates, snuff bottles, and ceremonial masks), but what really stood out were her accumulations — organized but excessive stockpiles of useful items such as flashlights, slippers, sweatshirts, kitchen towels, and watches (although most of the latter were no longer useful).
 

Ten pairs of unworn gardening gloves


During the year+ of COVID sheltering-in-place, some people wrote books, published papers, started nonprofits, engaged in fundraising, held Zoom benefit events, demonstrated for BLM, home-schooled their kids, taught classes, cared for sick household members, mourned the loss of their elder relatives, or endured COVID-19 themselves.
 
I dealt with the loss of a parent, along with the solo task of emptying 51 years of accumulated belongings from her home. To cope with this sad and lonely and emotionally grueling task, I took photos of my mother's accumulations and collections. It became a mini-obsession unto itself. I tried to make sense of my mother's motivations, but the trauma of her suffering and the specter of an unresolved childhood were too overwhelming. Besides, there's no computational model to explain the differences between Collectors, Accumulators and Hoarders.
 

Additional Reading

Compulsive Collecting of Toy Bullets

Compulsive Collecting of Televisions

The Neural Correlates of Compulsive Hoarding

Welcome to Douglas Coupland's Brain


References

Halperin DA, Glick J. (2003). Collectors, accumulators, hoarders, and hoarding perspectives. Addictive Disorders & Their Treatment 2(2):47-51.

Nakao T, Kanba S. (2019). Pathophysiology and treatment of hoarding disorder. Psychiatry Clin Neurosci. 73(7):370-375. doi:10.1111/pcn.12853






Wednesday, March 31, 2021

Overinterpreting Computational Models of Decision-Making

Bell (1985)



Can a set of equations predict and quantify complex emotions resulting from financial decisions made in an uncertain environment? An influential paper by David E. Bell considered the implications of disappointment, a psychological reaction caused by comparing an actual outcome to a more optimistic expected outcome, as in playing the lottery. Equations for regret, disappointment, elation, and satisfaction have been incorporated into economic models of financial decision-making (e.g., variants of prospect theory).
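To see how such equations get used, here is a schematic toy version of a disappointment/elation adjustment to expected value. It is loosely in the spirit of Bell (1985), but the functional form and the weights d and e are my simplifications, not his exact model:

```python
# Schematic only: falling short of the prior expectation hurts (weight d) more
# than exceeding it feels good (weight e). Lottery values are made up.

def disappointment_adjusted_value(outcomes, probs, d=2.0, e=1.0):
    expectation = sum(p * x for p, x in zip(probs, outcomes))
    total = 0.0
    for p, x in zip(probs, outcomes):
        gap = x - expectation
        psych = e * gap if gap >= 0 else d * gap   # elation vs. disappointment
        total += p * (x + psych)
    return total

# A long-shot lottery: mostly disappointment relative to the hopeful expectation
print(disappointment_adjusted_value(outcomes=[100.0, 0.0], probs=[0.05, 0.95]))
print(100 * 0.05)   # plain expected value, for comparison
```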

Financial choices comprise one critical aspect of decision-making in our daily lives. There are so many choices we make every day, from the proverbial option paralysis in the cereal aisle...

...to decisions about who to date, where to go on vacation, whether one should take a new job, change fields, start a business, move to a new city, get married, get divorced, have children (or not).

And who to trust. Futuristic scenario below...


Decision to Trust

I just met someone at a pivotal meeting of the Dryden Commission. We chatted beforehand and discovered we had some common ground. Plus he's brilliant, charming and witty.

“Are you looking for an ally?” he asked. 


Neil, Laura and Stanley in Season 3 of Humans

 

Should I trust this person and go out to dinner with him? Time to ask my assistant Stanley, the orange-eyed (servile) Synthetic, an anthropomorphic robot with superior strength and computational abilities.


Laura: “Stanley, was Dr. Sommer lying to me just then, about Basswood?”


Stanley, the orange-eyed Synth: “Based on initial analysis of 16 distinct physiological factors, I would rate the likelihood of deceit or misrepresentation in Dr. Sommer's response to your inquiry at... EIGHTY-FIVE PERCENT.”

The world would be easier to navigate if we could base our decisions on an abundance of data and well-tuned weighting functions accessible to the human brain. Right? Like a computational model of trust and reputation or a model of how people choose to allocate effort in social interactions. Right?

I'm out of my element here, so this will limit my understanding of these models. Which brings me to a more familiar topic: meta-commentary on interpretation (and extrapolation).

Computational Decision-Making


My motivation for writing this post was annoyance. And despair. A study on probabilistic decision-making under uncertain and volatile conditions came to the conclusion that people with anxiety and depression will benefit from focusing on past successes, instead of failures. Which kinda goes without saying. The paper in eLife was far more measured and sophisticated, but the press release said:

The more chaotic things get, the harder it is for people with clinical anxiety and/or depression to make sound decisions and to learn from their mistakes. On a positive note, overly anxious and depressed people’s judgment can improve if they focus on what they get right, instead of what they get wrong...

...researchers tested the probabilistic decision-making skills of more than 300 adults, including people with major depressive disorder and generalized anxiety disorder. In probabilistic decision making, people, often without being aware of it, use the positive or negative results of their previous actions to inform their current decisions.


The unaware shall become aware. Further advice:

“When everything keeps changing rapidly, and you get a bad outcome from a decision you make, you might fixate on what you did wrong, which is often the case with clinically anxious or depressed people...”

...individualized treatments, such as cognitive behavior therapy, could improve both decision-making skills and confidence by focusing on past successes, instead of failures...

 

The final statement on individualized CBT could very well be true, but it has nothing to do with the outcome of the study (Gagne et al., 2020), wherein participants chose between two shapes associated with differential probabilities of receiving electric shock (Exp. 1), or financial gain or loss (Exp. 2).
 


With that out of the way, I will say the experiments and the computational modeling approach are impressive. The theme is probabilistic decision-making under uncertainty, with the added bonus of volatility in the underlying causal structure (e.g., the square is suddenly associated with a higher probability of shocks). People with anxiety disorders and depression are generally intolerant of uncertainty. Learning the stimulus-outcome contingencies and then rapidly adapting to change was predictably impaired.

Does this general finding differ for learning under reward vs. punishment? For anxiety vs. depression? In the past, depression was associated with altered learning under reward, while anxiety was associated with altered learning under punishment (including in the authors' own work). For reasons that were not entirely clear to me, the authors chose to classify symptoms using a bifactor model designed to capture “internalizing psychopathology” common to both anxiety and depression vs. symptoms that are unique to each disorder [but see Fried (2021)].1

Overall, high scores on the common internalizing factor were associated with impaired adjustments to learning rate during the volatile condition, and this held whether the outcomes were shocks, financial gains, or financial losses. Meanwhile, high scores on anxiety-unique or depression-unique symptoms did not show this relationship. This was determined by computational modeling of task performance, using a hierarchical Bayesian framework to identify the model that best described the participants' behavior:

We fitted participants’ choice behavior using alternate versions of simple reinforcement learning models. We focused on models that were parameterized in a sufficiently flexible manner to capture differences in behavior between experimental conditions (block type: volatile versus stable; task version: reward gain versus aversive) and differences in learning from better or worse than expected outcomes. We used a hierarchical Bayesian approach to estimate distributions over model parameters at an individual- and population-level with the latter capturing variation as a function of general, anxiety-specific, and depression-specific internalizing symptoms. 
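As a concrete (and heavily simplified) illustration of what "adjusting the learning rate to volatility" means, here is a Rescorla-Wagner-style sketch rather than Gagne et al.'s hierarchical Bayesian model: a low learning rate tracks a stable contingency well, but lags badly when the contingency keeps reversing.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_block(true_p, alpha):
    """Track the probability that shape A leads to the outcome, trial by trial."""
    estimate, errors = 0.5, []
    for p in true_p:
        outcome = rng.random() < p                 # 1 if shape A produces the outcome
        errors.append(abs(estimate - p))
        estimate += alpha * (outcome - estimate)   # prediction-error update
    return np.mean(errors)

stable = np.full(80, 0.75)                          # contingency fixed
volatile = np.tile(np.repeat([0.8, 0.2], 20), 2)    # contingency flips every 20 trials

for alpha in (0.05, 0.4):
    print(f"alpha={alpha}: stable error={simulate_block(stable, alpha):.3f}, "
          f"volatile error={simulate_block(volatile, alpha):.3f}")
```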


We've been living in a very uncertain world for more than a year now, often in a state of loneliness and isolation. Some of us have experienced loss after loss, deteriorating mental health, lack of motivation, lack of purpose, and difficulty making decisions. My snappish response to the press release concerns whether we can prescribe individualized therapies based on the differences between the yellow arrows on the left (“resilient people”) and those on the right (“internalizing people” — i.e., the anxious and depressed), given that the participants may not even realize they're learning anything.



 Footnote

1 I will leave it to Dr. Eiko Fried (2021) to explain whether we should accept (or reject) this bifactor model of “shared symptoms” vs. “unshared symptoms”.



References

Bell DE. (1985). Disappointment in decision making under uncertainty. Operations Research 33(1):1-27.

Gagne C, Zika O, Dayan P, Bishop SJ. (2020). Impaired adaptation of learning to contingency volatility in internalizing psychopathology. Elife 9:e61387.

Further Reading

Fried EI. (2020). Lack of theory building and testing impedes progress in the factor and network literature. Psychological Inquiry 31(4):271-88.

Guest O, Martin AE. (2021) How computational modeling can force theory building in psychological science. Perspect Psychol Sci. Jan 22:1745691620970585.

van Rooij I, Baggio G. (2021). Theory before the test: How to build high-verisimilitude explanatory theories in psychological science. Perspect Psychol Sci. Jan 6:1745691620970604.


Sunday, February 28, 2021

Overview of 'The Spike': an epic journey through failure, darkness, meaning, and spontaneity

from Princeton University Press (March 9, 2021)


THE SPIKE is a marvelously unique popular neuroscience book by Professor Mark Humphries, Chair of Computational Neuroscience at the University of Nottingham and Proprietor of The Spike blog on Medium. Humphries' novel approach to brain exposition is built around — well — the spike, the electrical signal neurons use to communicate. In this magical rendition, the 2.1 second journey through the brain takes 174 pages (plus Acknowledgments and Endnotes).

I haven't read the entire book, so this is not a proper book review. But here's an overview of what I might expect. The Introduction is filled with inventive prose like, “We will wander through the splendor of the richly stocked prefrontal cortex and stand in terror before the wall of noise emanating from the basal ganglia.” (p. 10).


Did You Know That Your Life Can Be Reduced To Spikes?

Then there's the splendor and terror of a life reduced to spikes (p. 3):

“All told, your lifespan is about thirty-four billion billion cortical spikes.”
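That figure is easy to sanity-check with a back-of-the-envelope calculation (my own rough inputs, not necessarily the ones Humphries used):

```python
cortical_neurons = 16e9          # ~16 billion neurons in human cortex (rough estimate)
mean_rate_hz = 1.0               # assume ~1 spike/s on average (cortical firing is sparse)
lifespan_s = 80 * 365.25 * 24 * 3600

total_spikes = cortical_neurons * mean_rate_hz * lifespan_s
print(f"{total_spikes:.1e} spikes")   # ~4e19, i.e. tens of billions of billions
```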


Spike Drama

But will I grow weary of overly dramatic interpretations of spikes? “Our spike's arrival rips open bags of molecules stored at the end of the axon, forcing their contents to be dumped into the gap, and diffuse to the other side.” (p. 29-30).

Waiting for me on the other side of burst vesicles are intriguing chapters on Failure (dead end spikes) and Dark Neurons, the numerous weirdos who remain silent while their neighbors are “screaming at the top of [their] lungs.” (p. 83). I anticipate this story like a good mystery novel with wry throwaway observations (p. 82):

“Neuroimaging—functional MRI—shows us Technicolor images of the cortex, its regions lit up in a swirling riot of poorly chosen colors that make the Pantone people cry into their tasteful coffee mugs.”


Pantone colors of 2021 are gray and yellow

 

Wherever it ends up – with a mind-blowing new vision of the brain based on spontaneous spikes, or with just another opinion on predictive coding theory – I predict THE SPIKE will be an epic and entertaining journey. 

 



Friday, January 29, 2021

Thoughts of Blue Brains and GABA Interneurons


An unsuccessful plan to create a computer simulation of a human brain within 10 years. An exhaustive catalog of cell types comprising a specific class of inhibitory neurons within mouse visual cortex. What do these massive research programs have in common? Both efforts were conducted by large multidisciplinary teams at non-traditional research institutions: the Blue Brain Project based in Lausanne, Switzerland and the Allen Institute for Brain Science in Seattle, Washington.

BIG SCIENCE is the wave of the future, and the future is now. Actually, that future started 15-20 years ago. The question should be, is there a future for any other kind of neuroscience?
 

Despite a superficial “BIG SCIENCE” similarity, the differences between funding sources, business models, leadership, operation, and goals of Blue Brain and the Allen Institute are substantial. Henry Markram, the “charismatic but divisive” visionary behind Blue Brain (and the €1 billion Human Brain Project) has been criticized for his “autocratic” leadership, “crap” ideas, and “ill-conceived, ... idiosyncratic approach to brain simulation” in countless articles. His ambition is undeniable, however:

“I realized I could be doing this [e.g., standard research on spike-timing-dependent plasticity] for the next 25, 30 years of my career, and it was still not going to help me understand how the brain works.”

 

I'm certainly not a brilliant neuroscientist in Markram's league, but I commented previously on how a quest to discover “how the brain works” might be futile:

...the search for the Holy Grail of [spike trains, network generative models, manipulated neural circuit function, My Own Private Connectome, predictive coding, the free energy principle (PDF), or a computer simulation of the human brain promised by the Blue Brain Project] that will “explain” how “The Brain” works is a quixotic quest. It's a misguided effort when the goal is framed so simplistically (or monolithically).


In his infamous 2009 TED talk, Markram stated that a computer simulation of the human brain was possible in 10 years:
“I hope that you are at least partly convinced that it is not impossible to build a brain. We can do it within 10 years, and if we do succeed, we will send to TED, in 10 years, a hologram to talk to you.”


This claim would come back to haunt him in 2019, because (of course) he was nowhere close to simulating a human brain. In his defense, Markram said that his critics misunderstood and misinterpreted his grandiose proclamations.1

Blue Brain is now aimed at “biologically detailed digital reconstructions and simulations of the mouse brain.”

In Silico

Documentary filmmaker Noah Hutton2 undertook his own 10 year project that followed Markram and colleagues as they worked towards the goals of Blue Brain. He was motivated by that TED talk and its enthralling prediction of a brain in a supercomputer (hence in silico). Originally entitled Bluebrain and focused on Markram, the documentary evolved over time to include more realistic viewpoints and interviews with skeptical scientists, including Anne Churchland, Terry Sejnowski, Stanislas Dehaene, and Cori Bargmann. Ironically, Sebastian Seung was one of the loudest critics (ironic because Seung has a grandiose TED talk of his own, I Am My Connectome).


 

In Silico was available for streaming during the DOC NYC Festival in November (in the US only), and I had the opportunity to watch it. I was impressed by the motivation and dedication required to complete such a lengthy project.  Hutton had gathered so much footage that he could have made multiple movies from different perspectives.

Over the course of the film, Blue Brain/Human Brain blew up, with ample critiques and a signed petition from hundreds of neuroscientists (see archived Open Letter).

And Hutton grew up. He reflects on the process (and how he changed) at the end of film. He was only 22 at the start, and 10 years is a long time at any age.

Some of the Big Questions in In Silico:

  • How do you make sure all this lovely simulated activity would be relevant for an animal's behavior?
  • How do you build in biological imperfections (noise) or introduce chaos into your perfect pristine computational model? “Tiny mistakes” are critical for adaptable biological systems.

  • “You cannot play the same soccer game again,” said one of the critics (Terry Sejnowski, I think)

  • “What is a generic brain?”
  • What is the vision? 

The timeline kept drifting further and further into the future. It was 10 years in 2009, 10 years in 2012, 10 years in 2013, etc. 

Geneva 2019, and it's Year 10: only two Principals left, 150 papers published, and a model of 10 million neurons in mouse cortex. Stunning visuals, but still disconnected from behavior.

In the end, “What have we learned about the brain? Not much. The model is incomprehensible,” to paraphrase Sejnowski.


GABA Interneurons

Another brilliant and charismatic neuroscientist, Christof Koch, was interviewed by Hutton. “Henry has two personalities. One is a fantastic, sober scientist … the other is a PR-minded messiah.”

Koch is Chief Scientist of the MindScope Program at the Allen Institute for Brain Science, which focuses on how neural circuits produce vision. Another major unit is the Cell Types Program, which (as advertised) focuses on brain cell types and connectivity.3

The Allen Institute core principles are team science, Big Science, and open science. An impressive recent paper by Gouwens and 97 colleagues (2020) is a prime example of all three. Meticulous analyses of structural, physiological, and genetic properties identified 28 “met-types” of GABAergic interneurons that have congruent morphological, electrophysiological, and transcriptomic properties. This was winnowed down from more than 500 morphologies in 4,200 GABA-containing interneurons in mouse visual cortex. With this mind-boggling level of neuronal complexity in one specific class of cells in mouse cortex — along with the impossibility of “mind uploading” — my inclination is to say that we will never (never say never) be able to build a realistic computer simulation of the human brain.


Footnotes 

1 Here's another gem: “There literally are only a handful of equations that you need to simulate the activity of the neocortex.”

2 Most of Hutton's work has been as writer and director of documentary films, but I was excited to see that his first narrative feature, Lapsis, will be available for streaming next month. To accompany his film, he's created an immersive online world of interlinked websites that advertise non-existent employment opportunities, entertainment ventures, diseases, and treatments. It very much reminds me of the realistic yet spoof websites associated with the films Eternal Sunshine of the Spotless Mind (LACUNA, Inc.) and Ex Machina (BlueBook). In fact, I'm so enamored with them that they've appeared in several of my own blog posts.

3 Investigation of cell types is big in the NIH BRAIN Initiative® as well.



References

Abbott A. (2020). Documentary follows implosion of billion-euro brain project. Nature 588:215-6.

[Alison Abbott covered the Blue Brain/Human Brain sturm und drang for years]

Gouwens NW, Sorensen SA, Baftizadeh F, Budzillo A, Lee BR, Jarsky T, Alfiler L, Baker K, Barkan E, Berry K, Bertagnolli D ... Zeng H et al. (2020). Integrated morphoelectric and transcriptomic classification of cortical GABAergic cells. Cell 183(4):935-953.

Waldrop M. (2012). Computer modelling: Brain in a box. Nature News 482(7386):456.


Further Reading

The Blue Brain Project (01 February 2006), by Dr. Henry Markram

“Alan Turing (1912–1954) started off by wanting to 'build the brain' and ended up with a computer. ... As calculation speeds approach and go beyond the petaFLOPS range, it is becoming feasible to make the next series of quantum leaps to simulating networks of neurons, brain regions and, eventually, the whole brain.”

A brain in a supercomputer (July 2009), Henry Markram's TED talk
“Our mission is to build a detailed, realistic computer model of the human brain. And we've done, in the past four years, a proof of concept on a small part of the rodent brain, and with this proof of concept we are now scaling the project up to reach the human brain.”


Blue Brain Founder Responds to Critics, Clarifies His Goals (11 Feb 2011), Science news

Bluebrain: Noah Hutton's 10-Year Documentary about the Mission to Reverse Engineer the Human Brain (9 Nov 2012), an indispensable interview with Ferris Jabr in Scientific American

European neuroscientists revolt against the E.U.'s Human Brain Project (11 July 2014), Science news

Row hits flagship brain plan (7 July 2014), Nature news

Brain Fog (7 July 2014), Nature editorial

Human Brain Project votes for leadership change (4 March 2015), Nature news

'In Silico:' Director Noah Hutton reveals how one neuroscientist's pursuit of perfection went awry (10 Nov 2020), another indispensable interview, this time with Nadja Sayej in Inverse

“They still haven’t even simulated a whole mouse brain. I realized halfway through the 10-year point that the human brain probably wasn’t going to happen.” ...

In the first few years, I followed only the team. Then, I started talking to critics.

 

 


Wednesday, December 30, 2020

How the Brain Works


Every now and then, it's refreshing to remember how little we know about “how the brain works.” I put that phrase in quotes because the search for the Holy Grail of [spike trains, network generative models, manipulated neural circuit function, My Own Private Connectome, predictive coding, the free energy principle (PDF), or a computer simulation of the human brain promised by the Blue Brain Project] that will “explain” how “The Brain” works is a quixotic quest. It's a misguided effort when the goal is framed so simplistically (or monolithically).

First of all, whose brain are we trying to explain? Yours? Mine? The brain of a monkey, mouse, marsupial, monotreme, mosquito, or mollusk? Or C. elegans with its 302 neurons? “Yeah yeah, we get the point,” you say, “stop being so sarcastic and cynical. We're searching for core principles, first principles.”



In response to that tweet, definitions of “core principle” included:

  • Basically: a formal account of why brains encode information and control behaviour in the way that they do.
  • Fundamental theories on the underlying mechanisms of behavior. 
    • [Maybe “first principles” would be better?]
  • Set of rules by which neurons work?

 

Let's return to the problem of explanation. What are we trying to explain? Behavior, of course [a very specific behavior most of the time]: X behavior in your model organism. But we also want to explain thought, memory, perception, emotion, neurological disorders, mental illnesses, etc. Seems daunting now, eh? Can the same core principles account for all these phenomena across species? I'll step out on a limb here and say NO, then snort for asking such an unfair question. Best that your research program is broken down into tiny reductionistic chunks. More manageable that way.

But what counts as an “explanation”? We haven't answered that yet. It depends on your goal and your preferred level of analysis (à la three levels of David Marr):

computation – algorithm – implementation

 

 

Again, what counts as “explanation”? A concise answer was given by Lila Davachi during a talk in 2019, when we all still met in person for conferences:

“Explanations describe (causal) relationships between phenomena at different levels.”


from Dr. Lila Davachi (CNS meeting, 2019)
The Relation Between Psychology and Neuroscience
(see video, also embedded below)



UPDATE April 25, 2021: EXPLANATION IS IMPOSSIBLE, according to Rich, de Haan, Wareham, and van Rooij (2021), because "the inference problem is intractable, or even uncomputable":
"... even if all uncertainty is removed from scientific inference problems, there are further principled barriers to deriving explanations, resulting from the computational complexity of the inference problems."

Did I say this was a “refreshing” exercise? I meant depressing... but I'm usually a pessimist. (This has grown worse as I've gotten older and been in the field longer.)  
 
Are there reasons for optimism?

You can follow the replies here, and additional replies to this question in another thread starting here.

I'd say the Neuromatch movement (instigated by computational neuroscientists Konrad Kording and Dan Goodman) is definitely a reason for optimism!


Further Reading


The Big Ideas in Cognitive Neuroscience, Explained (2017)

... The end goal of a Marr-ian research program is to find explanations, to reach an understanding of brain-behavior relations. This requires a detailed specification of the computational problem (i.e., behavior) to uncover the algorithms. The correlational approach of cognitive neuroscience and even the causal-mechanistic circuit manipulations of optogenetic neuroscience just don't cut it anymore.

An epidemic of "Necessary and Sufficient" neurons (2018)

A miniaturized holy grail of neuroscience is discovering that activation or inhibition of a specific population of neurons (e.g., prefrontal parvalbumin interneurons) or neural circuit (e.g., basolateral amygdala → nucleus accumbens) is “necessary and sufficient” (N&S) to produce a given behavior.

Big Theory, Big Data, and Big Worries in Cognitive Neuroscience (from CNS meeting, 2018)
Dr. Eve Marder ... posed the greatest challenges to the field of cognitive neuroscience, objections that went mostly unaddressed by the other speakers.  [paraphrased below]:
  • How much ambiguity can you live with in your attempt to understand the brain? For me I get uncomfortable with anything more than 100 neurons
  • If you're looking for optimization (in [biological] neural networks), YOU ARE DELUSIONAL!
  • Degenerate mechanisms produce the same changes in behavior, even in a 5 neuron network...
  • ...so Cognitive Neuroscientists should be VERY WORRIED

 

 


The Neuromatch Revolution (2020)

“A conference made for the whole neuroscience community”

 

An Amicable Discussion About Psychology and Neuroscience (from CNS meeting, 2019)

  • the conceptual basis of cognitive neuroscience shouldn't be correlation
  • but what if the psychological and the biological are categorically dissimilar??

...and more!

The video below is set to begin with Dr. Davachi, but the entire symposium is included.


