Sunday, April 25, 2021

Hoarders and Collectors


Andy Warhol's collection of dental models

 
Pop artist Andy Warhol excelled in turning the everyday and the mundane into art. During the last 13 years of his life, Warhol put thousands of collected objects into 610 cardboard boxes. These Time Capsules were never sold as art, but they were meticulously cataloged by museum archivists and displayed in a major exhibition at the Andy Warhol Museum. “Warhol was a packrat. But that desire to collect helped inform his artistic point of view.” Yet Warhol was aware of his compulsion, and it disturbed him: “I'm so sick of the way I live, of all this junk, and always dragging home more.”

Where does the hobby of collecting cross over into hoarding, and who makes this determination?

Artists get an automatic pass into the realm of collectionism, no matter their level of compulsion. The Vancouver Art Gallery held a major exhibition of the works of Canadian writer and artist Douglas Coupland in 2014. One of the sections consisted of a room filled with 5,000 objects collected over 20 years and carefully arranged in a masterwork called The Brain. Here's what the collection looked like prior to assembly.
 

Materials used in The Brain, 2000–2014, mixed-media installation with readymade objects. Courtesy of the Artist and Daniel Faria Gallery. Photo: Trevor Mills, Vancouver Art Gallery.


Hoarding, on the other hand, lacks the artistic intent or deliberate organization of collecting. Collectors may be passionate, but their obsessions/compulsions do not hinder their everyday function (or personal safety). According to Halperin and Glick (2003):
“Characteristically, collectors organize their collections, which while extensive, do not make their homes dysfunctional or otherwise unlivable. They see their collections as adding a new dimension to their lives in terms of providing an area of beauty or historical continuity that might otherwise be lacking.”
 
The differential diagnosis for the DSM-5 classification of Hoarding Disorder vs. non-pathological Collecting treats order and value as the criteria of primary importance.



Fig. 2 (Nakao & Kanba, 2019).
If possessions are well organized and have a specific value, the owner is defined as a ‘collector.’ Medical conditions that cause secondary hoarding are excluded from Hoarding Disorder. Hoarding that is better explained by comorbidities such as obsessive-compulsive disorder (OCD), autism spectrum disorder (ASD), or attention deficit hyperactivity disorder (ADHD) must be ruled out as well.
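Reduced to its branching logic, the Fig. 2 differential reads like a short chain of exclusions. Here is a toy sketch of that flow (the function and argument names are my own invention, not a clinical instrument):

```python
def classify_possession_pattern(organized_and_valued: bool,
                                secondary_medical_cause: bool,
                                explained_by_comorbidity: bool) -> str:
    """Toy rendering of the decision flow in Fig. 2 (Nakao & Kanba, 2019).

    Illustration only; the argument names are invented, and a real
    differential diagnosis involves far more than three booleans.
    """
    if organized_and_valued:
        return "collector (non-pathological)"
    if secondary_medical_cause:
        return "secondary hoarding (excluded from Hoarding Disorder)"
    if explained_by_comorbidity:  # e.g., OCD, ASD, or ADHD accounts for it
        return "hoarding attributable to a comorbid condition"
    return "Hoarding Disorder"
```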


I've held onto the wish of writing about this topic for the last eight months...


...because of the time I spent sorting through my mother's possessions between July 2020 and November 2020 after she died on July 4th. This process entailed flying across the country five times in a total of 20 different planes in the midst of a pandemic.
 
Although my mother showed some elements of  hoarding, she didn't meet clinical criteria. She had various collections of objects (e.g., glass shoes, decorator plates, snuff bottles, and ceremonial masks), but what really stood out were her accumulations — organized but excessive stockpiles of useful items such as flashlights, slippers, sweatshirts, kitchen towels, and watches (although most of the latter were no longer useful).
 

Ten pairs of unworn gardening gloves


During the year+ of COVID sheltering-in-place, some people wrote books, published papers, started nonprofits, engaged in fundraising, held Zoom benefit events, demonstrated for BLM, home-schooled their kids, taught classes, cared for sick household members, mourned the loss of their elder relatives, or endured COVID-19 themselves.
 
I dealt with the loss of a parent, along with the solo task of emptying 51 years of accumulated belongings from her home. To cope with this sad and lonely and emotionally grueling task, I took photos of my mother's accumulations and collections. It became a mini-obsession unto itself. I tried to make sense of my mother's motivations, but the trauma of her suffering and the specter of an unresolved childhood were too overwhelming. Besides, there's no computational model to explain the differences between Collectors, Accumulators and Hoarders.
 

Additional Reading

Compulsive Collecting of Toy Bullets

Compulsive Collecting of Televisions

The Neural Correlates of Compulsive Hoarding

Welcome to Douglas Coupland's Brain


References

Halperin DA, Glick J. (2003). Collectors, accumulators, hoarders, and hoarding perspectives. Addictive Disorders & Their Treatment 2(2):47-51.

Nakao T, Kanba S. (2019). Pathophysiology and treatment of hoarding disorder. Psychiatry Clin Neurosci. 73(7):370-375. doi:10.1111/pcn.12853






Wednesday, March 31, 2021

Overinterpreting Computational Models of Decision-Making

Bell (1985)



Can a set of equations predict and quantify complex emotions resulting from financial decisions made in an uncertain environment? An influential paper by David E. Bell considered the implications of disappointment, a psychological reaction caused by comparing an actual outcome to a more optimistic expected outcome, as in playing the lottery. Equations for regret, disappointment, elation, and satisfaction have been incorporated into economic models of financial decision-making (e.g., variants of prospect theory).
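For the curious, the core of Bell's model can be written down in a few lines. This is my compressed rendering of the simplest two-outcome case, not the paper's full axiomatic treatment:

```latex
% A gamble pays x with probability p, otherwise y (with x > y).
% The prior expectation is \bar{z} = p x + (1 - p) y.
% Psychological payoff adds elation (coefficient e) when the outcome
% beats the expectation and disappointment (coefficient d) when it
% falls short:
\begin{align*}
u(x) &= x + e\,(x - \bar{z}) = x + e\,(1 - p)(x - y), \\
u(y) &= y - d\,(\bar{z} - y) = y - d\,p\,(x - y).
\end{align*}
```

Note that the disappointment term scales with p: losing a near-certain win stings more than losing a long-shot lottery ticket, which is why expectations matter as much as outcomes.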

Financial choices comprise one critical aspect of decision-making in our daily lives. There are so many choices we make every day, from the proverbial option paralysis in the cereal aisle...

...to decisions about who to date, where to go on vacation, whether one should take a new job, change fields, start a business, move to a new city, get married, get divorced, have children (or not).

And who to trust. Futuristic scenario below...


Decision to Trust

I just met someone at a pivotal meeting of the Dryden Commission. We chatted beforehand and discovered we had some common ground. Plus he's brilliant, charming and witty.

“Are you looking for an ally?” he asked. 


Neil, Laura and Stanley in Season 3 of Humans

 

Should I trust this person and go out to dinner with him? Time to ask my assistant Stanley, the orange-eyed (servile) Synthetic, an anthropomorphic robot with superior strength and computational abilities.


Laura: “Stanley, was Dr. Sommer lying to me just then, about Basswood?”


Stanley, the orange-eyed Synth: “Based on initial analysis of 16 distinct physiological factors, I would rate the likelihood of deceit or misrepresentation in Dr. Sommer's response to your inquiry at... EIGHTY-FIVE PERCENT.”

The world would be easier to navigate if we could base our decisions on an abundance of data and well-tuned weighting functions accessible to the human brain. Right? Like a computational model of trust and reputation or a model of how people choose to allocate effort in social interactions. Right?

I'm out of my element here, so this will limit my understanding of these models. Which brings me to a more familiar topic: meta-commentary on interpretation (and extrapolation).

Computational Decision-Making


My motivation for writing this post was annoyance. And despair. A study on probabilistic decision-making under uncertain and volatile conditions came to the conclusion that people with anxiety and depression will benefit from focusing on past successes, instead of failures. Which kinda goes without saying. The paper in eLife was far more measured and sophisticated, but the press release said:

The more chaotic things get, the harder it is for people with clinical anxiety and/or depression to make sound decisions and to learn from their mistakes. On a positive note, overly anxious and depressed people’s judgment can improve if they focus on what they get right, instead of what they get wrong...

...researchers tested the probabilistic decision-making skills of more than 300 adults, including people with major depressive disorder and generalized anxiety disorder. In probabilistic decision making, people, often without being aware of it, use the positive or negative results of their previous actions to inform their current decisions.


The unaware shall become aware. Further advice:

“When everything keeps changing rapidly, and you get a bad outcome from a decision you make, you might fixate on what you did wrong, which is often the case with clinically anxious or depressed people...”

...individualized treatments, such as cognitive behavior therapy, could improve both decision-making skills and confidence by focusing on past successes, instead of failures...

 

The final statement on individualized CBT could very well be true, but it has nothing to do with the outcome of the study (Gagne et al., 2020), wherein participants chose between two shapes associated with differential probabilities of receiving electric shock (Exp. 1), or financial gain or loss (Exp. 2).
 


With that out of the way, I will say the experiments and the computational modeling approach are impressive. The theme is probabilistic decision-making under uncertainty, with the added bonus of volatility in the underlying causal structure (e.g., the square is suddenly associated with a higher probability of shocks). People with anxiety disorders and depression are generally intolerant of uncertainty. Learning the stimulus-outcome contingencies and then rapidly adapting to change was predictably impaired.

Does this general finding differ for learning under reward vs. punishment? For anxiety vs. depression? In the past, depression was associated with altered learning under reward, while anxiety was associated with altered learning under punishment (including in the authors' own work). For reasons that were not entirely clear to me, the authors chose to classify symptoms using a bifactor model designed to capture “internalizing psychopathology” common to both anxiety and depression vs. symptoms that are unique to each disorder [but see Fried (2020)].1

Overall, high scores on the common internalizing factor were associated with impaired adjustments to learning rate during the volatile condition, and this held whether the outcomes were shocks, financial gains, or financial losses. Meanwhile, high scores on anxiety-unique or depression-unique symptoms did not show this relationship. This was determined by computational modeling of task performance, using a hierarchical Bayesian framework to identify the model that best described the participants' behavior:

We fitted participants’ choice behavior using alternate versions of simple reinforcement learning models. We focused on models that were parameterized in a sufficiently flexible manner to capture differences in behavior between experimental conditions (block type: volatile versus stable; task version: reward gain versus aversive) and differences in learning from better or worse than expected outcomes. We used a hierarchical Bayesian approach to estimate distributions over model parameters at an individual- and population-level with the latter capturing variation as a function of general, anxiety-specific, and depression-specific internalizing symptoms. 
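To make the modeling concrete, here is a stripped-down cousin of the models being compared: a Rescorla-Wagner learner whose learning rate may differ between stable and volatile blocks. This is a sketch of the general approach, not the authors' actual model (which also split learning rates by outcome valence and was fit hierarchically):

```python
import numpy as np

def rescorla_wagner(outcomes, block_is_volatile,
                    alpha_stable, alpha_volatile, beta, p0=0.5):
    """Toy learner: track P(shape A is reinforced) trial by trial.

    outcomes:          1 if shape A was reinforced on a trial, else 0
    block_is_volatile: per-trial flag for volatile vs. stable blocks
    beta:              inverse temperature for the softmax choice rule
    Returns the model's probability of choosing shape A on each trial.
    """
    p = p0
    choice_probs = []
    for outcome, volatile in zip(outcomes, block_is_volatile):
        alpha = alpha_volatile if volatile else alpha_stable
        # softmax over two options; the value difference is p - (1 - p)
        choice_probs.append(1.0 / (1.0 + np.exp(-beta * (2.0 * p - 1.0))))
        p += alpha * (outcome - p)  # prediction-error update
    return np.array(choice_probs)
```

In this scheme, healthy adaptation shows up as alpha_volatile > alpha_stable, and the internalizing-factor deficit amounts to a blunted difference between the two.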


We've been living in a very uncertain world for more than a year now, often in a state of loneliness and isolation. Some of us have experienced loss after loss, deteriorating mental health, lack of motivation, lack of purpose, and difficulty making decisions. My snappish response to the press release concerns whether we can prescribe individualized therapies based on the differences between the yellow arrows on the left (“resilient people”) and those on the right (“internalizing people” — i.e., the anxious and depressed), given that the participants may not even realize they're learning anything.



Footnote

1 I will leave it to Dr. Eiko Fried (2020) to explain whether we should accept (or reject) this bifactor model of “shared symptoms” vs. “unshared symptoms”.



References

Bell DE. (1985). Disappointment in decision making under uncertainty. Operations Research 33(1):1-27.

Gagne C, Zika O, Dayan P, Bishop SJ. (2020). Impaired adaptation of learning to contingency volatility in internalizing psychopathology. eLife 9:e61387.

Further Reading

Fried EI. (2020). Lack of theory building and testing impedes progress in the factor and network literature. Psychological Inquiry 31(4):271-88.

Guest O, Martin AE. (2021). How computational modeling can force theory building in psychological science. Perspect Psychol Sci. Jan 22:1745691620970585.

van Rooij I, Baggio G. (2021). Theory before the test: How to build high-verisimilitude explanatory theories in psychological science. Perspect Psychol Sci. Jan 6:1745691620970604.


Sunday, February 28, 2021

Overview of 'The Spike': an epic journey through failure, darkness, meaning, and spontaneity

from Princeton University Press (March 9, 2021)


THE SPIKE is a marvelously unique popular neuroscience book by Professor Mark Humphries, Chair of Computational Neuroscience at the University of Nottingham and Proprietor of The Spike blog on Medium. Humphries' novel approach to brain exposition is built around — well — the spike, the electrical signal neurons use to communicate. In this magical rendition, the 2.1-second journey through the brain takes 174 pages (plus Acknowledgments and Endnotes).

I haven't read the entire book, so this is not a proper book review. But here's an overview of what I might expect. The Introduction is filled with inventive prose like, “We will wander through the splendor of the richly stocked prefrontal cortex and stand in terror before the wall of noise emanating from the basal ganglia.” (p. 10).


Did You Know That Your Life Can Be Reduced To Spikes?

Then there's the splendor and terror of a life reduced to spikes (p. 3):

“All told, your lifespan is about thirty-four billion billion cortical spikes.”


Spike Drama

But will I grow weary of overly dramatic interpretations of spikes? “Our spike's arrival rips open bags of molecules stored at the end of the axon, forcing their contents to be dumped into the gap, and diffuse to the other side.” (p. 29-30).

Waiting for me on the other side of burst vesicles are intriguing chapters on Failure (dead-end spikes) and Dark Neurons, the numerous weirdos who remain silent while their neighbors are “screaming at the top of [their] lungs.” (p. 83). I anticipate this story the way I would a good mystery novel, with wry throwaway observations (p. 82):

“Neuroimaging—functional MRI—shows us Technicolor images of the cortex, its regions lit up in a swirling riot of poorly chosen colors that make the Pantone people cry into their tasteful coffee mugs.”


Pantone colors of 2021 are gray and yellow

 

Wherever it ends up – with a mind-blowing new vision of the brain based on spontaneous spikes, or with just another opinion on predictive coding theory – I predict THE SPIKE will be an epic and entertaining journey. 

 



Friday, January 29, 2021

Thoughts of Blue Brains and GABA Interneurons


An unsuccessful plan to create a computer simulation of a human brain within 10 years. An exhaustive catalog of cell types comprising a specific class of inhibitory neurons within mouse visual cortex. What do these massive research programs have in common? Both efforts were conducted by large multidisciplinary teams at non-traditional research institutions: the Blue Brain Project based in Lausanne, Switzerland and the Allen Institute for Brain Science in Seattle, Washington.

BIG SCIENCE is the wave of the future, and the future is now. Actually, that future started 15-20 years ago. The question should be, is there a future for any other kind of neuroscience?
 

Despite a superficial “BIG SCIENCE” similarity, the differences between funding sources, business models, leadership, operation, and goals of Blue Brain and the Allen Institute are substantial. Henry Markram, the “charismatic but divisive” visionary behind Blue Brain (and the €1 billion Human Brain Project) has been criticized for his “autocratic” leadership, “crap” ideas, and “ill-conceived, ... idiosyncratic approach to brain simulation” in countless articles. His ambition is undeniable, however:

“I realized I could be doing this [e.g., standard research on spike-timing-dependent plasticity] for the next 25, 30 years of my career, and it was still not going to help me understand how the brain works.”

 

I'm certainly not a brilliant neuroscientist in Markram's league, but I commented previously on how a quest to discover “how the brain works” might be futile:

...the search for the Holy Grail of [spike trains, network generative models, manipulated neural circuit function, My Own Private Connectome, predictive coding, the free energy principle (PDF), or a computer simulation of the human brain promised by the Blue Brain Project] that will “explain” how “The Brain” works is a quixotic quest. It's a misguided effort when the goal is framed so simplistically (or monolithically).


In his infamous 2009 TED talk, Markram stated that a computer simulation of the human brain was possible in 10 years:
“I hope that you are at least partly convinced that it is not impossible to build a brain. We can do it within 10 years, and if we do succeed, we will send to TED, in 10 years, a hologram to talk to you.”


This claim would come back to haunt him in 2019, because (of course) he was nowhere close to simulating a human brain. In his defense, Markram said that his critics misunderstood and misinterpreted his grandiose proclamations.1

Blue Brain is now aimed at “biologically detailed digital reconstructions and simulations of the mouse brain.”

In Silico

Documentary filmmaker Noah Hutton2 undertook his own 10-year project that followed Markram and colleagues as they worked towards the goals of Blue Brain. He was motivated by that TED talk and its enthralling prediction of a brain in a supercomputer (hence In Silico). Originally entitled Bluebrain and focused on Markram, the documentary evolved over time to include more realistic viewpoints and interviews with skeptical scientists, including Anne Churchland, Terry Sejnowski, Stanislas Dehaene, and Cori Bargmann. Ironically, Sebastian Seung was one of the loudest critics (ironic because Seung has a grandiose TED talk of his own, I Am My Connectome).


 

In Silico was available for streaming during the DOC NYC Festival in November (in the US only), and I had the opportunity to watch it. I was impressed by the motivation and dedication required to complete such a lengthy project.  Hutton had gathered so much footage that he could have made multiple movies from different perspectives.

Over the course of the film, Blue Brain/Human Brain blew up, with ample critiques and a signed petition from hundreds of neuroscientists (see archived Open Letter).

And Hutton grew up. He reflects on the process (and how he changed) at the end of the film. He was only 22 at the start, and 10 years is a long time at any age.

Some of the Big Questions in In Silico:

  • How do you make sure all this lovely simulated activity would be relevant for an animal's behavior?
  • How do you build in biological imperfections (noise) or introduce chaos into your perfect pristine computational model? “Tiny mistakes” are critical for adaptable biological systems.

  • “You cannot play the same soccer game again,” said one of the critics (Terry Sejnowski, I think)

  • “What is a generic brain?”
  • What is the vision? 

The timeline kept drifting further and further into the future. It was 10 years in 2009, 10 years in 2012, 10 years in 2013, etc. 

Geneva 2019, and it's Year 10: only two principals are left, 150 papers have been published, and there's a model of 10 million neurons in mouse cortex. Stunning visuals, but still disconnected from behavior.

In the end, “What have we learned about the brain? Not much. The model is incomprehensible,” to paraphrase Sejnowski.


GABA Interneurons

Another brilliant and charismatic neuroscientist, Christof Koch, was interviewed by Hutton. “Henry has two personalities. One is a fantastic, sober scientist … the other is a PR-minded messiah.”

Koch is Chief Scientist of the MindScope Program at the Allen Institute for Brain Science, which focuses on how neural circuits produce vision. Another major unit is the Cell Types Program, which (as advertised) focuses on brain cell types and connectivity.3

The Allen Institute core principles are team science, Big Science, and open science. An impressive recent paper by Gouwens and 97 colleagues (2020) is a prime example of all three. Meticulous analyses of structural, physiological, and genetic properties identified 28 “met-types” of GABAergic interneurons that have congruent morphological, electrophysiological, and transcriptomic properties. This was winnowed down from more than 500 morphologies in 4,200 GABA-containing interneurons in mouse visual cortex. With this mind-boggling level of neuronal complexity in one specific class of cells in mouse cortex — along with the impossibility of “mind uploading” — my inclination is to say that we will never (never say never) be able to build a realistic computer simulation of the human brain.
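As a cartoon of the “met-type” logic (emphatically not the authors' actual pipeline): cluster the cells within each data modality independently, then ask whether the partitions agree. Met-types are the groupings where morphology, electrophysiology, and transcriptomics converge. A minimal sketch with made-up feature matrices:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
n_cells = 200

# Stand-in feature matrices; the real study used reconstructed morphologies,
# electrophysiological recordings, and single-cell transcriptomes.
morphology = rng.normal(size=(n_cells, 10))
ephys = rng.normal(size=(n_cells, 10))
transcriptome = rng.normal(size=(n_cells, 10))

labels = {
    name: KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
    for name, X in [("morph", morphology), ("ephys", ephys),
                    ("tx", transcriptome)]
}

# Congruence check: do independent modalities recover similar partitions?
# (Random stand-in data will sit near chance, i.e., an ARI near zero.)
print(adjusted_rand_score(labels["morph"], labels["ephys"]))
print(adjusted_rand_score(labels["ephys"], labels["tx"]))
```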


Footnotes 

1 Here's another gem: “There literally are only a handful of equations that you need to simulate the activity of the neocortex.”

2 Most of Hutton's work has been as writer and director of documentary films, but I was excited to see that his first narrative feature, Lapsis, will be available for streaming next month. To accompany his film, he's created an immersive online world of interlinked websites that advertise non-existent employment opportunities, entertainment ventures, diseases, and treatments. It very much reminds me of the realistic yet spoof websites associated with the films Eternal Sunshine of the Spotless Mind (LACUNA, Inc.) and Ex Machina (BlueBook). In fact, I'm so enamored with them that they've appeared in several of my own blog posts.

3 Investigation of cell types is big in the NIH BRAIN Initiative® as well.



References

Abbott A. (2020). Documentary follows implosion of billion-euro brain project. Nature 588:215-6.

[Alison Abbott covered the Blue Brain/Human Brain sturm und drang for years]

Gouwens NW, Sorensen SA, Baftizadeh F, Budzillo A, Lee BR, Jarsky T, Alfiler L, Baker K, Barkan E, Berry K, Bertagnolli D, ... Zeng H, et al. (2020). Integrated morphoelectric and transcriptomic classification of cortical GABAergic cells. Cell 183(4):935-953.

Waldrop M. (2012). Computer modelling: Brain in a box. Nature News 482(7386):456.


Further Reading

The Blue Brain Project (01 February 2006), by Dr. Henry Markram

“Alan Turing (1912–1954) started off by wanting to 'build the brain' and ended up with a computer. ... As calculation speeds approach and go beyond the petaFLOPS range, it is becoming feasible to make the next series of quantum leaps to simulating networks of neurons, brain regions and, eventually, the whole brain.”

A brain in a supercomputer (July 2009), Henry Markram's TED talk
“Our mission is to build a detailed, realistic computer model of the human brain. And we've done, in the past four years, a proof of concept on a small part of the rodent brain, and with this proof of concept we are now scaling the project up to reach the human brain.”


Blue Brain Founder Responds to Critics, Clarifies His Goals (11 Feb 2011), Science news

Bluebrain: Noah Hutton's 10-Year Documentary about the Mission to Reverse Engineer the Human Brain (9 Nov 2012), an indispensable interview with Ferris Jabr in Scientific American

European neuroscientists revolt against the E.U.'s Human Brain Project (11 July 2014), Science news

Row hits flagship brain plan (7 July 2014), Nature news

Brain Fog (7 July 2014), Nature editorial

Human Brain Project votes for leadership change (4 March 2015), Nature news

'In Silico:' Director Noah Hutton reveals how one neuroscientist's pursuit of perfection went awry (10 Nov 2020), another indispensable interview, this time with Nadja Sayej in Inverse

“They still haven’t even simulated a whole mouse brain. I realized halfway through the 10-year point that the human brain probably wasn’t going to happen.” ...

In the first few years, I followed only the team. Then, I started talking to critics.

 

 


Wednesday, December 30, 2020

How the Brain Works


Every now and then, it's refreshing to remember how little we know about “how the brain works.” I put that phrase in quotes because the search for the Holy Grail of [spike trains, network generative models, manipulated neural circuit function, My Own Private Connectome, predictive coding, the free energy principle (PDF), or a computer simulation of the human brain promised by the Blue Brain Project] that will “explain” how “The Brain” works is a quixotic quest. It's a misguided effort when the goal is framed so simplistically (or monolithically).

First of all, whose brain are we trying to explain? Yours? Mine? The brain of a monkey, mouse, marsupial, monotreme, mosquito, or mollusk? Or C. elegans with its 302 neurons? “Yeah yeah, we get the point,” you say, “stop being so sarcastic and cynical. We're searching for core principles, first principles.”



In response to that tweet, definitions of “core principle” included:

  • Basically: a formal account of why brains encode information and control behaviour in the way that they do.
  • Fundamental theories on the underlying mechanisms of behavior. 
    • [Maybe “first principles” would be better?]
  • Set of rules by which neurons work?

 

Let's return to the problem of explanation. What are we trying to explain? Behavior, of course [a very specific behavior most of the time]: X behavior in your model organism. But we also want to explain thought, memory, perception, emotion, neurological disorders, mental illnesses, etc. Seems daunting now, eh? Can the same core principles account for all these phenomena across species? I'll step out on a limb here and say NO, then snort at myself for asking such an unfair question. Best to break your research program down into tiny reductionistic chunks. More manageable that way.

But what counts as an “explanation”? We haven't answered that yet. It depends on your goal and your preferred level of analysis (à la three levels of David Marr):

computation – algorithm – implementation

 

 

Again, what counts as “explanation”? A concise answer was given by Lila Davachi during a talk in 2019, when we all still met in person for conferences:

“Explanations describe (causal) relationships between phenomena at different levels.”


from Dr. Lila Davachi (CNS meeting, 2019)
The Relation Between Psychology and Neuroscience
(see video, also embedded below)



UPDATE April 25, 2021: EXPLANATION IS IMPOSSIBLE, according to Rich, de Haan, Wareham, and van Rooij (2021), because “the inference problem is intractable, or even uncomputable”:
“... even if all uncertainty is removed from scientific inference problems, there are further principled barriers to deriving explanations, resulting from the computational complexity of the inference problems.”

Did I say this was a “refreshing” exercise? I meant depressing... but I'm usually a pessimist. (This has grown worse as I've gotten older and been in the field longer.)  
 
Are there reasons for optimism?

You can follow the replies here, and additional replies to this question in another thread starting here.

I'd say the Neuromatch movement (instigated by computational neuroscientists Konrad Kording and Dan Goodman) is definitely a reason for optimism!


Further Reading


The Big Ideas in Cognitive Neuroscience, Explained (2017)

... The end goal of a Marr-ian research program is to find explanations, to reach an understanding of brain-behavior relations. This requires a detailed specification of the computational problem (i.e., behavior) to uncover the algorithms. The correlational approach of cognitive neuroscience and even the causal-mechanistic circuit manipulations of optogenetic neuroscience just don't cut it anymore.

An epidemic of "Necessary and Sufficient" neurons (2018)

A miniaturized holy grail of neuroscience is discovering that activation or inhibition of a specific population of neurons (e.g., prefrontal parvalbumin interneurons) or neural circuit (e.g., basolateral amygdala → nucleus accumbens) is “necessary and sufficient” (N&S) to produce a given behavior.

Big Theory, Big Data, and Big Worries in Cognitive Neuroscience (from CNS meeting, 2018)
Dr. Eve Marder ... posed the greatest challenges to the field of cognitive neuroscience, objections that went mostly unaddressed by the other speakers.  [paraphrased below]:
  • How much ambiguity can you live with in your attempt to understand the brain? For me I get uncomfortable with anything more than 100 neurons
  • If you're looking for optimization (in [biological] neural networks), YOU ARE DELUSIONAL!
  • Degenerate mechanisms produce the same changes in behavior, even in a 5 neuron network...
  • ...so Cognitive Neuroscientists should be VERY WORRIED

 

 


The Neuromatch Revolution (2020)

“A conference made for the whole neuroscience community”

 

An Amicable Discussion About Psychology and Neuroscience (from CNS meeting, 2019)

  • the conceptual basis of cognitive neuroscience shouldn't be correlation
  • but what if the psychological and the biological are categorically dissimilar??

...and more!

The video below is set to begin with Dr. Davachi, but the entire symposium is included.



Monday, November 30, 2020

The Neurohumanities: a new interdisciplinary paradigm or just another neuroword?

 


The latest issue of Neuron has published five thematic “NeuroView” papers proposing that neuroscience can augment our understanding of classically brain-free fields like art, literature, and theology. Two of the articles discuss the relatively established pursuits of neuroaesthetics (Iigaya et al., 2020) and neuromorality/moral decision-making (Kelly & O'Connell, 2020). 

Another article outlines the bare bones of an ambitious search for the neural correlates of collective memory, or the “Cultural Engram” (Dudai, 2020):

I consider human cultures as biocultural “supraorganisms” that can store memory as distributed experience-dependent, behaviorally relevant representations over hundreds and thousands of years. Similar to other memory systems, these supraorganisms encode, consolidate, store, modify, and express memory items in the concerted activity of multiple types and tokens of sub-components of the system. ... the memory traces are encoded in large distributed assemblies, composed of individual brains, intragenerational and intergenerational interacting brains, and multiple types of artifacts that interact with brains.


The concept of the “Cultural Engram” is not new, but a research program that incorporates an animal model for cultural memory is indeed novel (regardless of its potential validity):

The search for the cultural engram ... must be paired with productive model systems. The human cultural engram is awaiting its supraorganism equivalents of Aplysia, Drosophila, or fear conditioning for it to give away its inner workings.

In other words, a model of human cultural memory in sea slugs and fruit flies.


Hartley and Poeppel (2020) discuss “A Neurohumanities Approach to Language, Music, and Emotion,” which is intriguing to me, since the domains of language, music, and emotion have a long history within the pantheon of human cognitive neuroscience research. However, they aptly summarize the limitations of these established fields:

...one must bear in mind clear limitations: the insights remain by-and-large correlational, not explanatory. ... we still lack the appropriate “conceptual resolution” to develop in a comprehensive, mechanistic, and explanatory fashion how these domains of rich individual experience are implemented in a brain.

Which leads us to the question that motivates this special collection on the Emerging Partnership for Exploring the Human Experience:

Why the Neurohumanities?

Why, indeed. Why now? This is not a particularly new neuroword. A Google search reveals a number of existing programs and conferences in Neurohumanities. From the late aughts to the mid tens, I questioned the rigor of potentially misguided pursuits such as Neuroetiquette and Neuroculture, Neuro-Gov, Neurobranding, and The Neuroscience of Kitchen Cabinetry.

 

One thing that's exciting and new is...

A 2016 to 2021 Wellcome Trust ISSF Award to Trinity College allows opportunities for Trinity Staff to build a new programme in “Neurohumanities” and Public Engagement and to establish or expand research programmes through new collaborations.

 

In support of this initiative, Carew & Ramaswami (2020) argue that...

...the time is right for a closer partnership between specific domains of neuroscience and their counterparts within the humanities, which we define broadly as all aspects of human society and culture, including, language, literature, philosophy, law, politics, religion, art, history, and social psychology.  ...  In addition to the opportunities such partnerships represent for new creative research, we suggest that neuroscience also has a pressing responsibility to engage with the canvas of human experience and problems of critical importance to today’s society, as well as for communicating with a clear objective voice to diverse audiences across professional, cultural, and national boundaries. 


Of critical importance to US society is the erosion of truth and the promulgation of political misinformation at the highest levels. We can't wait for neuroscientific solutions for this menace to democracy. Or as I said in 2017, Neuroscience Can't Heal a Divided Nation.


Additional Reading

The Humanities Are Ruining Neuroscience

Professor of Literary Neuroimaging

Harry Potter and the Prisoner of Mid-Cingulate Cortex

The use and abuse of the prefix neuro- in the decades of the BRAIN

 

References

Carew TJ, Ramaswami M. (2020). The Neurohumanities: An Emerging Partnership for Exploring the Human Experience. Neuron 108(4):590-3.
 
Dudai Y. (2020). In Search of the Cultural Engram. Neuron 108(4):600-3.
 
Hartley CA, Poeppel D. (2020). Beyond the Stimulus: A Neurohumanities Approach to Language, Music, and Emotion. Neuron 108(4):597-9.
 
Iigaya K, O’Doherty JP, Starr GG. (2020). Progress and promise in neuroaesthetics. Neuron 108(4):594-6.
 
Kelly C, O’Connell R. (2020). Can Neuroscience Change the Way We View Morality? Neuron 108(4):604-7.



Friday, October 30, 2020

COVID-19, Predictive Coding, and Terror Management



Pandemics have a way of bringing death into sharper focus in our everyday lives. As of this writing, 1,188,259 people around the world have died from COVID-19, including 234,218 in the United States. In the dark days of April, the death rate was over 20%. Although this has declined dramatically (to 3%), it’s utterly reckless to minimize the risks of coronavirus and flout every mitigation strategy endorsed by infectious disease specialists.


He's like an evil Oprah. You're getting COVID. And you're getting COVID!

One might think that contracting and recovering from COVID-19 would be a sobering experience for most people, but not for the Übermensch (Nietzschean 'Superman'... but really, 'Last Man' is more appropriate) who had access to the latest experimental treatments.1 Trump's boastful reaction is exactly how the 'Coronavirus Episode' of the (scripted) White House reality show was written: “I feel better than I did 20 years ago!” and “I'm a perfect physical specimen.”

This dismissive display reinforces the partisan divide on perceptions of the pandemic and the federal response to it. A recent study by Pew Research Center found major differences in how Democrats and Republicans view the severity of COVID-19. Results from the survey (conducted Aug. 31-Sept. 7, 2020) were no surprise. 

 

 

And as we know, Democrats and Republicans exist in alternate universes constructed by non-overlapping media sources (CNN vs. Fox, to oversimplify), which in turn correlates with whether they wear masks, practice social distancing, and avoid crowds. A new paper in Science (Finkel et al., 2020) integrated data from multiple disciplines to examine the partisan political environment in the US. They found that Democratic and Republican voters have become:

“...POLITICALLY SECTARIAN -- fervently committed to a political identity characterized by three properties: (1) othering (opposing partisans are alien to us), (2) aversion (they are dislikable & untrustworthy), and (3) moralization (they are iniquitous).”

The authors concluded that the combination of all three core ingredients is especially toxic. Furthermore:



Perfect! Dread and existential threat to a fervent political identity during a pandemic that reminds us of our own mortality. The Science paper has a sidebar about motivated (or biased) cognition and whether Democrats and Republicans are equally susceptible (many studies), or whether Republicans are more susceptible than Democrats (other studies).2 

 


We seek out information that confirms our views and push away evidence that contradicts our pre-existing beliefs about “the other”.


Death Denial to Avert Existential Crisis

We also push away thoughts of our own demise: death is something that happens to other people, not to me. Awareness of death or mortality salience — pondering the inevitability of your own death, a time when you will no longer exist — triggers anxiety, according to Terror Management Theory (TMT). In response to this threat, humans react in ways that boost their self-esteem and reinforce their own values (and punish outsiders). These cognitive processes are conceptualized as nebulous “defenses” [nebulous to me, at least] that are deployed to minimize terror. Notably, however, experimental manipulation of mortality salience did not affect “worldview defense” in the large-scale Many Labs 4 replication project, which throws cold water on this aspect of TMT.


Predictive Coding and Perceived Risk of COVID-19

An alternative view of how we disassociate ourselves from death awareness is provided by predictive coding theory. This influential framework hypothesizes that the brain is constantly generating and updating its models of the world based on top-down “biases” and bottom-up sensory input (Clark, 2013):

Brains ... are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. 

Prediction errors are minimized by perceptual inference (updating predictions to better match the input) or active inference (sampling the input in a biased fashion to better fit the predictions). A recent paper considered this framework with regard to beliefs generated during the pandemic, and how they're related to the health precautions adopted by individuals to mitigate spread of the virus (Bottemanne et al., 2020). This paper was conceptual (not computational), and it was written in French (meaning I had to read it using Google Translate).

In brief, pandemics are massive sources of uncertainty. There was a delay in the perception of risk, followed by unrealistic optimism (“certainly I do not run the risk of becoming infected”) despite the growing accumulation of evidence to the contrary. The reduced perception of risk leads people to flout precautionary mandates, even in France (which is currently showing a greater spike in cases than the US). Subsequently, overwhelming media saturation on the daily death toll and the dangers of COVID-19 updates predictions of risk and triggers mortality salience (Bottemanne et al., 2020).
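To make the two routes concrete, here is a minimal scalar sketch (my own toy numbers and function names, not the model in either paper). Perceptual inference nudges the belief toward the evidence, active inference filters the evidence to fit the belief, and pandemic-style optimism corresponds to a very small learning rate on bad news:

```python
def perceptual_inference(belief, evidence, learning_rate):
    """Update the prediction toward the sensory evidence (delta rule)."""
    return belief + learning_rate * (evidence - belief)

def active_inference(evidence_stream, belief, tolerance=0.2):
    """Sample selectively, keeping only data that fit the prediction."""
    return [e for e in evidence_stream if abs(e - belief) < tolerance]

# Unrealistic optimism as a sluggish update on bad news:
risk_belief = 0.05                      # "certainly I won't become infected"
for evidence in [0.6, 0.7, 0.8]:        # mounting indicators of risk
    risk_belief = perceptual_inference(risk_belief, evidence,
                                       learning_rate=0.02)
print(round(risk_belief, 3))            # still far below the evidence
```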

And in support of TMT, Framing COVID-19 as an Existential Threat Predicts Anxious Arousal and Prejudice towards Chinese People. Every day in the US, the president and his minions refer to the novel coronavirus as “the China virus” and other disparaging names. Is it any wonder that discrimination and violence against Asian-Americans have increased?


If you're American, PLEASE VOTE if you haven't already.


Further Reading

Covid-19 makes us think about our mortality. Our brains aren’t designed for that.

Existential Neuroscience: a field in search of meaning

Neuroexistentialism: A Brain in Search of Meaning

Existential Dread of Absurd Social Psychology Studies

Terror Management Theory

Footnotes

1 The Last Man is the antithesis of the Superman:

An overman [superman] as described by Zarathustra, the main character in Thus Spoke Zarathustra, is the one who is willing to risk all for the sake of enhancement of humanity. In contrary [is] the 'last man' whose sole desire is his own comfort and is incapable of creating anything beyond oneself in any form.

Trump's declaration: “...All I know is I took something, whatever the hell it was. I felt good very quickly... I felt like Superman.” Whether his kitchen-sink treatment regimen was a good idea has been firmly challenged.

2 There's a large literature on potential cognitive and neural differences between liberals and conservatives, but I won't cover that here. I wrote about many of these studies in the days of yore.


References

Bottemanne H, Morlaàs O, Schmidt L, Fossati P. (2020). Coronavirus: cerveau prédictif et gestion de la terreur [Coronavirus: Predictive brain and terror management]. Encephale 46(3S):S107-S113.

Clark A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences 36(3):181-204.

Finkel EJ et al. (2020). Political sectarianism in America. Science 370:533-536.


“What's going on with this guy?”

 


 

What is the truth underneath the tweet?

  AP Photo


President Trump showed labored breathing during his first appearance on the White House balcony


Regarding his joyride in the black SUV while he was still hospitalized at Walter Reed:

He did not look tough; he looked trapped.

He looked desperate. He looked pathetic. He looked weak — not because he was ill or because he was finally wearing a mask but because instead of doing the hard work of accepting his own vulnerabilities in the face of sickness, he’d propped himself up on the strength and professionalism of Secret Service agents. Instead of focusing on the humbling task of getting better, he was consumed by the desire to simply look good.

 the end.

