Wednesday, December 30, 2020

How the Brain Works


Every now and then, it's refreshing to remember how little we know about “how the brain works.” I put that phrase in quotes because the search for the Holy Grail of [spike trains, network generative models, manipulated neural circuit function, My Own Private Connectome, predictive coding, the free energy principle (PDF), or a computer simulation of the human brain promised by the Blue Brain Project] that will “explain” how “The Brain” works is a quixotic quest. It's a misguided effort when the goal is framed so simplistically (or monolithically).

First of all, whose brain are we trying to explain? Yours? Mine? The brain of a monkey, mouse, marsupial, monotreme, mosquito, or mollusk? Or C. elegans with its 302 neurons? “Yeah yeah, we get the point,” you say, “stop being so sarcastic and cynical. We're searching for core principles, first principles.”



In response to that tweet, definitions of “core principle” included:

  • Basically: a formal account of why brains encode information and control behaviour in the way that they do.
  • Fundamental theories on the underlying mechanisms of behavior. 
    • [Maybe “first principles” would be better?]
  • Set of rules by which neurons work?

 

Let's return to the problem of explanation. What are we trying to explain? Behavior, of course [a very specific behavior most of the time]: X behavior in your model organism. But we also want to explain thought, memory, perception, emotion, neurological disorders, mental illnesses, etc. Seems daunting now, eh? Can the same core principles account for all these phenomena across species? I'll step out on a limb here and say NO, then snort at myself for asking such an unfair question. Best to break your research program down into tiny reductionistic chunks. More manageable that way.

But what counts as an “explanation”? We haven't answered that yet. It depends on your goal and your preferred level of analysis (à la three levels of David Marr):

computation – algorithm – implementation

 

 

Again, what counts as “explanation”? A concise answer was given by Lila Davachi during a talk in 2019, when we all still met in person for conferences:

“Explanations describe (causal) relationships between phenomena at different levels.”


from Dr. Lila Davachi (CNS meeting, 2019)
The Relation Between Psychology and Neuroscience
(see video, also embedded below)



Did I say this was a “refreshing” exercise? I meant depressing... but I'm usually a pessimist. (This has grown worse as I've gotten older and been in the field longer.)

 
Are there reasons for optimism?

You can follow the replies here, and additional replies to this question in another thread starting here.

I'd say the Neuromatch movement (instigated by computational neuroscientists Konrad Kording and Dan Goodman) is definitely a reason for optimism!


Further Reading


The Big Ideas in Cognitive Neuroscience, Explained (2017)

... The end goal of a Marr-ian research program is to find explanations, to reach an understanding of brain-behavior relations. This requires a detailed specification of the computational problem (i.e., behavior) to uncover the algorithms. The correlational approach of cognitive neuroscience and even the causal-mechanistic circuit manipulations of optogenetic neuroscience just don't cut it anymore.

An epidemic of "Necessary and Sufficient" neurons (2018)

A miniaturized holy grail of neuroscience is discovering that activation or inhibition of a specific population of neurons (e.g., prefrontal parvalbumin interneurons) or neural circuit (e.g., basolateral amygdala → nucleus accumbens) is “necessary and sufficient” (N&S) to produce a given behavior.

Big Theory, Big Data, and Big Worries in Cognitive Neuroscience (from CNS meeting, 2018)
Dr. Eve Marder ... posed the greatest challenges to the field of cognitive neuroscience, objections that went mostly unaddressed by the other speakers.  [paraphrased below]:
  • How much ambiguity can you live with in your attempt to understand the brain? For me I get uncomfortable with anything more than 100 neurons
  • If you're looking for optimization (in [biological] neural networks), YOU ARE DELUSIONAL!
  • Degenerate mechanisms produce the same changes in behavior, even in a 5 neuron network...
  • ...so Cognitive Neuroscientists should be VERY WORRIED

 

 


The Neuromatch Revolution (2020)

“A conference made for the whole neuroscience community”

 

An Amicable Discussion About Psychology and Neuroscience (from CNS meeting, 2019)

  • the conceptual basis of cognitive neuroscience shouldn't be correlation
  • but what if the psychological and the biological are categorically dissimilar??

...and more!

The video below is set to begin with Dr. Davachi, but the entire symposium is included.


Subscribe to Post Comments [Atom]

Monday, November 30, 2020

The Neurohumanities: a new interdisciplinary paradigm or just another neuroword?

 


The latest issue of Neuron has published five thematic “NeuroView” papers proposing that neuroscience can augment our understanding of classically brain-free fields like art, literature, and theology. Two of the articles discuss the relatively established pursuits of neuroaesthetics (Iigaya et al., 2020) and neuromorality/moral decision-making (Kelly & O'Connell, 2020). 

Another article outlines the bare bones of an ambitious search for the neural correlates of collective memory, or the “Cultural Engram” (Dudai, 2020):

I consider human cultures as biocultural “supraorganisms” that can store memory as distributed experience-dependent, behaviorally relevant representations over hundreds and thousands of years. Similar to other memory systems, these supraorganisms encode, consolidate, store, modify, and express memory items in the concerted activity of multiple types and tokens of sub-components of the system. ... the memory traces are encoded in large distributed assemblies, composed of individual brains, intragenerational and intergenerational interacting brains, and multiple types of artifacts that interact with brains.


The concept of the “Cultural Engram” is not new, but a research program that incorporates an animal model for cultural memory is indeed novel (regardless of its potential validity):

The search for the cultural engram ... must be paired with productive model systems. The human cultural engram is awaiting its supraorganism equivalents of Aplysia, Drosophila, or fear conditioning for it to give away its inner workings.

In other words, a model of human cultural memory in sea slugs and fruit flies.


Hartley and Poeppel (2020) discuss “A Neurohumanities Approach to Language, Music, and Emotion,” which is intriguing to me, since the domains of language, music, and emotion have a long history within the pantheon of human cognitive neuroscience research. However, they aptly summarize the limitations of these established fields:

...one must bear in mind clear limitations: the insights remain by-and-large correlational, not explanatory. ... we still lack the appropriate “conceptual resolution” to develop in a comprehensive, mechanistic, and explanatory fashion how these domains of rich individual experience are implemented in a brain.

Which leads us to the question that motivates this special collection on the Emerging Partnership for Exploring the Human Experience:

Why the Neurohumanities?

Why, indeed. Why now? This is not a particularly new neuroword. A Google search reveals a number of existing programs and conferences in Neurohumanities. From the late aughts to the mid tens, I questioned the rigor of potentially misguided pursuits such as Neuroetiquette and Neuroculture, Neuro-Gov, Neurobranding, and The Neuroscience of Kitchen Cabinetry.

 

One thing that's exciting and new is...

A 2016 to 2021 Wellcome Trust ISSF Award to Trinity College allows opportunities for Trinity Staff to build a new programme in “Neurohumanities” and Public Engagement and to establish or expand research programmes through new collaborations.

 

In support of this initiative, Carew & Ramaswami (2020) argue that...

...the time is right for a closer partnership between specific domains of neuroscience and their counterparts within the humanities, which we define broadly as all aspects of human society and culture, including, language, literature, philosophy, law, politics, religion, art, history, and social psychology.  ...  In addition to the opportunities such partnerships represent for new creative research, we suggest that neuroscience also has a pressing responsibility to engage with the canvas of human experience and problems of critical importance to today’s society, as well as for communicating with a clear objective voice to diverse audiences across professional, cultural, and national boundaries. 


Of critical importance to US society is the erosion of truth and the promulgation of political misinformation at the highest levels. We can't wait for neuroscientific solutions for this menace to democracy. Or as I said in 2017, Neuroscience Can't Heal a Divided Nation.


Additional Reading

The Humanities Are Ruining Neuroscience

Professor of Literary Neuroimaging

Harry Potter and the Prisoner of Mid-Cingulate Cortex

The use and abuse of the prefix neuro- in the decades of the BRAIN

 

References

Carew TJ, Ramaswami M. (2020). The Neurohumanities: An Emerging Partnership for Exploring the Human Experience. Neuron 108(4):590-3.
 
Dudai Y. (2020). In Search of the Cultural Engram. Neuron 108(4):600-3.
 
Hartley CA, Poeppel D. (2020). Beyond the Stimulus: A Neurohumanities Approach to Language, Music, and Emotion. Neuron 108(4):597-9.
 
Iigaya K, O’Doherty JP, Starr GG. (2020). Progress and promise in neuroaesthetics. Neuron 108(4):594-6.
 
Kelly C, O’Connell R. (2020). Can Neuroscience Change the Way We View Morality? Neuron 108(4):604-7.



Friday, October 30, 2020

COVID-19, Predictive Coding, and Terror Management



Pandemics have a way of bringing death into sharper focus in our everyday lives. As of this writing, 1,188,259 people around the world have died from COVID-19, including 234,218 in the United States. In the dark days of April, the death rate was over 20%. Although this has declined dramatically (to 3%), it's utterly reckless to minimize the risks of coronavirus and flout every mitigation strategy endorsed by infectious disease specialists.
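The “death rate” here is the case-fatality ratio: confirmed deaths divided by confirmed cases over some window. A minimal sketch, with made-up counts chosen only to reproduce the ~20% and ~3% figures in the text (not actual epidemiological data):

```python
# Case-fatality ratio (CFR): confirmed deaths / confirmed cases.
# The counts below are illustrative placeholders, not real COVID-19 data.

def case_fatality_ratio(deaths, confirmed_cases):
    return deaths / confirmed_cases

april_cfr = case_fatality_ratio(20_000, 100_000)    # ~20% in the worst weeks
october_cfr = case_fatality_ratio(3_000, 100_000)   # ~3% after testing expanded
```

One caveat the simple ratio hides: the apparent decline partly reflects a larger denominator as testing expanded, not only better outcomes.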


He's like an evil Oprah. You're getting COVID. And you're getting COVID!

One might think that contracting and recovering from COVID-19 would be a sobering experience for most people, but not for the Übermensch (Nietzschean 'Superman'... but really, 'Last Man' is more appropriate) who had access to the latest experimental treatments.1 Trump's boastful reaction is exactly how the 'Coronavirus Episode' of the (scripted) White House reality show was written: “I feel better than I did 20 years ago!” and “I'm a perfect physical specimen.”

This dismissive display reinforces the partisan divide on perceptions of the pandemic and the federal response to it. A recent study by Pew Research Center found major differences in how Democrats and Republicans view the severity of COVID-19. Results from the survey (conducted Aug. 31-Sept. 7, 2020) were no surprise. 

 

 

And as we know, Democrats and Republicans exist in alternate universes constructed by non-overlapping media sources (CNN vs. Fox, to oversimplify), which in turn correlates with whether they wear masks, practice social distancing, and avoid crowds. A new paper in Science (Finkel et al., 2020) integrated data from multiple disciplines to examine the partisan political environment in the US. They found that Democratic and Republican voters have become:

“...POLITICALLY SECTARIAN -- fervently committed to a political identity characterized by three properties: (1) othering (opposing partisans are alien to us), (2) aversion (they are dislikable & untrustworthy), and (3) moralization (they are iniquitous).”

The authors concluded that the combination of all three core ingredients is especially toxic. Furthermore:



Perfect! Dread and existential threat to a fervent political identity during a pandemic that reminds us of our own mortality. The Science paper has a sidebar about motivated (or biased) cognition and whether Democrats and Republicans are equally susceptible (many studies), or whether Republicans are more susceptible than Democrats (other studies).2 

 


We seek out information that confirms our views and push away evidence that contradicts our pre-existing beliefs about “the other”.


Death Denial to Avert Existential Crisis

We also push away thoughts of our own demise: death is something that happens to other people, not to me. Awareness of death or mortality salience — pondering the inevitability of your own death, a time when you will no longer exist — triggers anxiety, according to Terror Management Theory (TMT). In response to this threat, humans react in ways that boost their self-esteem and reinforce their own values (and punish outsiders). These cognitive processes are conceptualized as nebulous “defenses” [nebulous to me, at least] that are deployed to minimize terror. Notably, however, experimental manipulation of mortality salience did not affect “worldview defense” in the large-scale Many Labs 4 replication project, which throws cold water on this aspect of TMT.


Predictive Coding and Perceived Risk of COVID-19

An alternative view of how we disassociate ourselves from death awareness is provided by predictive coding theory. This influential framework hypothesizes that the brain is constantly generating and updating its models of the world based on top-down “biases” and bottom-up sensory input (Clark, 2013):

Brains ... are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. 

Prediction errors are minimized by perceptual inference (updating predictions to better match the input) or active inference (sampling the input in a biased fashion to better fit the predictions). A recent paper considered this framework with regard to beliefs generated during the pandemic, and how they're related to health precautions adopted by individuals to mitigate spread of the virus (Bottemanne et al, 2020). This paper was conceptual (not computational), and it was written in French (meaning I had to read it using Google translate). 
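The perceptual-inference half of that loop can be sketched in a few lines. This is my own toy illustration, not a model from Clark (2013) or Bottemanne et al. (2020): a scalar “belief” about infection risk is nudged toward incoming evidence, with the learning rate standing in for the precision the brain assigns to the sensory signal.

```python
# Toy predictive-coding update (illustrative only): perceptual inference
# moves the prediction toward the input in proportion to the prediction
# error, weighted by an assumed "precision" term.

def update_belief(belief, evidence, precision=0.2):
    """One step of perceptual inference on a scalar belief."""
    prediction_error = evidence - belief
    return belief + precision * prediction_error

# Unrealistic optimism: a low prior belief about risk updates only sluggishly,
# even as the same high-risk evidence arrives again and again.
belief = 0.05                      # "certainly I do not run the risk..."
for evidence in [0.6, 0.6, 0.6]:   # repeated high-risk signals
    belief = update_belief(belief, evidence)
```

Active inference would instead change which evidence gets sampled (e.g., switching to news sources that confirm the low-risk prediction), leaving the belief itself untouched.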

In brief, pandemics are massive sources of uncertainty. There was a delay in the perception of risk, followed by unrealistic optimism (“certainly I do not run the risk of becoming infected”) despite the growing accumulation of evidence to the contrary. The reduced perception of risk leads people to flout precautionary mandates, even in France (which is currently showing a greater spike in cases than the US). Subsequently, overwhelming media saturation on the daily death toll and the dangers of COVID-19 updates predictions of risk and triggers mortality salience (Bottemanne et al, 2020).

And in support of TMT, Framing COVID-19 as an Existential Threat Predicts Anxious Arousal and Prejudice towards Chinese People. Every day in the US, the president and his minions call the novel coronavirus “the China virus” and other disparaging terms. Is it any wonder that discrimination and violence against Asian-Americans have increased?


If you're American, PLEASE VOTE if you haven't already.


Further Reading

Covid-19 makes us think about our mortality. Our brains aren’t designed for that.

Existential Neuroscience: a field in search of meaning

Neuroexistentialism: A Brain in Search of Meaning

Existential Dread of Absurd Social Psychology Studies

Terror Management Theory

Footnotes

1 The Last Man is the antithesis of the Superman:

An overman [superman] as described by Zarathustra, the main character in Thus Spoke Zarathustra, is the one who is willing to risk all for the sake of enhancement of humanity. In contrary [is] the 'last man' whose sole desire is his own comfort and is incapable of creating anything beyond oneself in any form.

Trump's declaration: “...All I know is I took something, whatever the hell it was. I felt good very quickly ... I felt like Superman.” Whether his kitchen-sink treatment regimen was a good idea has been firmly challenged.

2 There's a large literature on potential cognitive and neural differences between liberals and conservatives, but I won't cover that here. I wrote about many of these studies in the days of yore.


References

Bottemanne H, Morlaàs O, Schmidt L, Fossati P. (2020). Coronavirus: cerveau prédictif et gestion de la terreur [Coronavirus: Predictive brain and terror management]. Encephale 46(3S):S107-S113.

Clark A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences 36(3):181-204.

Finkel EJ et al. (2020). Political sectarianism in America. Science 370:533-536.


“What's going on with this guy?”

 


 

What is the truth underneath the tweet?

  AP Photo


President Trump showed labored breathing during his first appearance on the White House balcony


Regarding his joyride in the black SUV while he was still hospitalized at Walter Reed:

He did not look tough; he looked trapped.

He looked desperate. He looked pathetic. He looked weak — not because he was ill or because he was finally wearing a mask but because instead of doing the hard work of accepting his own vulnerabilities in the face of sickness, he’d propped himself up on the strength and professionalism of Secret Service agents. Instead of focusing on the humbling task of getting better, he was consumed by the desire to simply look good.

 the end.


Wednesday, September 30, 2020

Neuralink in a Dozen Pigs


In a far-ranging chat with Kara Swisher, Elon Musk talked about sustainable energy, brain implants, the stupidity of the press, and more. He gave a casual update on the “Three Little Pigs” demo of Neuralink's 1024-channel chip, finally admitting that his lofty goals are in a “very, very primitive stage”:

Elon Musk: You can make people walk again. You could solve extreme depression or anxiety or schizophrenia or seizures. You could give a mother back her memory so she could remember who her kids are, you know. Basically, if you live long enough, you’re going to get dementia of some kind. And you’ll want to have something to help you. [NOTE: here, he didn't acknowledge the potential for advancements in biological treatments for dementia.]

Kara Swisher:  Could it program in empathy? Or other things? Do you imagine that being part of this? [LAUGHTER] Or hey you could—

EM: You could technically program anything. So empathy is probably a good one.

KS: So where are we in doing this?

EM: So where we are right now is we’re still in a very, very primitive stage. Where thus far we’ve had a lot of successful implants in pigs. And we now have a pig that has had an implant that’s working well and it’s been there for over three months. And we now have implanted about a dozen pigs. And the sensors are working well. A large part of a pig brain is about its snout. So you can literally rub the pig on its snout and we can detect exactly where you touch the snout. [NOTE: “Yeah, that's called somatotopic mapping,” said John Hughlings Jackson in 1886.]
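The snout demo is, as the Hughlings Jackson aside suggests, just somatotopic mapping. A toy readout of that idea (my own sketch, with a hypothetical channel-to-location map; nothing to do with Neuralink's actual pipeline): if each electrode channel responds preferentially to one snout location, the touched spot can be decoded as the channel with the strongest response.

```python
# Toy somatotopic decoder (illustrative assumption: one preferred snout
# location per channel). Real decoding is far harder -- see Ari Benjamin's
# point below about our lack of understanding of the neural code.

SNOUT_MAP = {0: "left nostril", 1: "snout tip", 2: "right nostril"}

def decode_touch(channel_activity):
    """Return the snout location whose channel shows the strongest response."""
    best = max(range(len(channel_activity)), key=channel_activity.__getitem__)
    return SNOUT_MAP[best]
```

A winner-take-all readout like this works precisely because the map is fixed and the stimulus is simple; it says nothing about decoding goals or movement plans.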


Listen to the podcast: Elon Musk: ‘A.I. Doesn’t Need to Hate Us to Destroy Us’ 

In a conversation with Kara Swisher, the billionaire entrepreneur talks space-faring civilization, battery-powered everything and computer chips in your skull.


Bonus!! Musk on Trump:

Kara Swisher: Do you like him? Are you voting for him?

Elon Musk: [SIGHING] I mean, I’m — to be totally frank I’m not — I mean, I think — let’s just see how the debates go. You know?

KS: That’s going to be your thing, the debates?

EM: Well, I think that is probably the thing that will decide things for America.

KS: Why is that?

EM: I think people just want to see if Biden’s got it together.

KS: Mm-hmm. And if he does?

EM: If he does, he probably wins.

 

He hasn't yet tweeted about the disgraceful dumpster fire... 

 





Monday, August 31, 2020

The Mundane Spectacle of the Three Little Pigs



“This Neuralink is implanted in the region of the brain that uh where where the snout the snout is located which is actually quite a large part of the pig's brain.” 1

Elon Musk held a press event (product demo) to make grandiose claims about the Neuralink 1024-channel brain implant currently under development by his start-up.

Three pigs were unveiled, all healthy and happy: Joyce (the one without an implant), Dorothy (who formerly had an implant), and Gertrude, the star of the day with her snout boops. The crowd applauded, impressed at this monumental accomplishment. However, recording spike trains from the brains of animals is as old as time. And actually, wireless Implantable Neuroprosthetics in Pigs is so 2011...2


The title of this post in TNW said it best:

I was excited for Neuralink. Then I watched Elon Musk’s stupid demo
“Here’s the one fact you need to know: Neuralink's actual device is less capable than similar medical BCIs already on the market. The big claim to fame here is that Neuralink hopes one day to bring this technology to the masses.”

And really, invasive intracranial technology is likely to be obsolete by the time the requisite advances in neural decoding occur (if ever). As Kording Lab member Ari Benjamin told BBC News:
“Once they have the recordings, Neuralink will need to decode them and will someday hit the barrier that is our lack of basic understanding of how the brain works, no matter how many neurons they record from.

Decoding goals and movement plans is hard when you don't understand the neural code in which those things are communicated.”

Another winner in the snark department was MIT Technology Review, with:

Elon Musk’s Neuralink is neuroscience theater
“...Neuralink has provided no evidence that it can (or has even tried to) treat depression, insomnia, or a dozen other diseases that Musk mentioned in a slide. One difficulty ahead of the company is perfecting microwires that can survive the ‘corrosive’ context of a living brain for a decade. That problem alone could take years to solve.

The primary objective of the streamed demo, instead, was to stir excitement, recruit engineers to the company (which already employs about 100 people), and build the kind of fan base that has cheered on Musk’s other ventures...”

The cult of Musk is indeed cheering, in a rather credulous fashion (e.g., Why Neuralink Will Change Humanity Forever).


Footnotes

1 It's actually correct that the representation of the snout in pig somatosensory cortex occupies a disproportionately large portion of the cortex.

2 Borton et al. (2011) reported on their “complete neural prosthetic developmental system using a wireless sensor as the implant, a pig as the animal model, and a novel data acquisition paradigm for actuator control.” At that time, the system had 'only' 16 channels, but the field as a whole has evolved since then.


ADDENDUM (Sept 1, 2020):
from Neuralink Progress Update, Summer 2020



An implantable device will solve all these problems by correcting aberrant electrical signals. And summon your Tesla telepathically too!

  • Save and replay memories!
  • Super-Vision! (ultraviolet or infrared)
  • Use a computer by thought alone!


Thursday, July 30, 2020

What Color is Your Mental Parachute?

Aphantasia and Occupational Choice


NOTE: This isn't a real test of visual imagery. Click HERE for the Simple Aphantasia Test, which assesses whether (and how well) you can imagine pictures in your mind's eye.


Do you prefer to learn by studying material that is visual, auditory, verbal (reading/writing), or kinesthetic (“by doing”) in nature? A massive educational industry has promoted the idea of distinct “learning styles” based on preference for one of these four modalities (take the VARK!). This neuromyth has been thoroughly debunked (see this FAQ).

But we humans clearly vary in our cognitive strengths, and this in turn influences our choice of career. This should come as no surprise.

A recent study queried the occupational choices of self-selected populations of people at the extremes of visual imagery abilities: those with Aphantasia (n=993 male/981 female) or Hyperphantasia (n=65 male/132 female). This was assessed by their scores on the Vividness of Visual Imagery Questionnaire (VVIQ). There was also a control group with average scores on the VVIQ, but they were poorly matched on age and education.
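For readers unfamiliar with the VVIQ: it has 16 items, each rated from 1 (“no image at all”) to 5 (“perfectly clear and vivid”), summed to a 16-80 total. A minimal scoring sketch follows; the cutoffs are illustrative placeholders of mine, not the criteria Zeman et al. (2020) actually used.

```python
# VVIQ-style scorer: 16 items rated 1-5, summed to a 16-80 total.
# The classification cutoffs below are hypothetical, for illustration only.

def vviq_total(ratings):
    """Sum 16 item ratings (each 1-5) into a total score."""
    assert len(ratings) == 16 and all(1 <= r <= 5 for r in ratings)
    return sum(ratings)

def classify(total, low_cutoff=23, high_cutoff=75):  # placeholder cutoffs
    if total <= low_cutoff:
        return "aphantasia"
    if total >= high_cutoff:
        return "hyperphantasia"
    return "typical"
```

Note that the whole instrument is introspective self-report, which is exactly the limitation the authors acknowledge below.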


Fig. 4 (Zeman et al., 2020). Percentage of participants with aphantasia and hyperphantasia reporting their occupation as being:
1 = Management, 2 = Business and financial; 3 = Computer and mathematical/Life, physical, social science; 4 = Education, training, and library; 5 = Arts, design, entertainment, sports and media; 6 = Healthcare, practitioners and technical.


As expected, people with fantastic visual imagery were more likely to be in arts, design, entertainment, and media, as well as sports (an excellent ability to imagine a pole vault or swing a bat would be very helpful). People with poor to no visual imagery were more likely to choose a scientific or mathematical occupation. These categories are rather broad, however. For instance, “media” includes print media. And artists and photographers with Aphantasia certainly do exist.

The study had a number of limitations, e.g. washing out individual differences and relying on introspection for rating visual imagery ability (as noted by the authors). There are more objective ways to test for imagery, but these involve in-person visits. Although the authors were circumspect in the Discussion, they were a bit splashy in the title of their paper (Phantasia–The Psychological Significance Of Lifelong Visual Imagery Vividness Extremes). And the condition of “Aphantasia” existed long before it was named and popularized. But these researchers have caught the imagination of the general public, so to speak:
The delineation of these forms of extreme imagery also clarifies a vital distinction between imagery and imagination: people with aphantasia–who include the geneticist Craig Venter, the neurologist Oliver Sacks and the creator of Firefox, Blake Ross–can be richly imaginative, as visualisation is only one element of this more complex capacity to represent, reshape and reconceive things in their absence.

Reference

Zeman A, Milton F, Della Sala S, Dewar M, Frayling T, Gaddum J, Hattersley A, Heuerman-Williamson B, Jones K, MacKisack M, Winlove C. (2020). Phantasia–The Psychological Significance Of Lifelong Visual Imagery Vividness Extremes. Cortex 2020 May 4:S0010-9452(20)30140-4.


Tuesday, June 30, 2020

Traces of Fear in Aphantasia

When reading a vivid story that describes a shark attack, do you imagine yourself in the ocean, seeing the dorsal fin approach you?
“...sun glints off the waves / suddenly a dark flash / in the distant waves / maybe it was a shadow / you turn to the beach / more people are pointing / they look anxious / looking back out to sea / a large fin / slices the surface / moving closer...”



Or is your “mind's eye” — your visual mental imagery of the evocative scene ⁠— essentially a blank?



early warning: picture of snake below

One's subjective internal life of thinking, perceiving, imagining, and remembering belongs to oneself and nobody else. [Brain scanning is still not a mind reader.] An increasing number of media reports (and scientific studies) have shined a light on this fact: the mental life of one person differs from that of another, sometimes in startling ways. It's always been that way, but now it's out in the open.

The cat is out of the bag.



When reading that sentence, did you have a fleeting mental picture in your mind's eye? Maybe it was clear, maybe it was hazy. Or maybe you saw no visual image at all... if that was the case, you might have a condition known as Aphantasia, the inability to voluntarily generate mental imagery. This is a normal variant of human experience, albeit an uncommon one.

What are the “consequences” of having Aphantasia? You may be more likely to choose a scientific or mathematical occupation, although artists and photographers with Aphantasia certainly exist. Aphantasia is often associated with poor autobiographical memory (diminished ability to recall the past episodes of your life).

Does Aphantasia affect your emotional reactions to ordinary experiences like looking at pictures or reading a story? If visual imagery is important for having an affective response to the shark story, would people with Aphantasia show physiological (bodily) signs of emotion while reading? Wicken and colleagues (2019) asked this question by comparing the skin conductance response (sweaty palms) evoked by reading vs. looking at pictures. This was a pilot study reported in a preprint (not yet peer reviewed).

If visual imagery is necessary for an affective response to evocative stories, then A-Phantasics should have diminished (or absent) skin conductance responses (SCRs) compared to Typical-Phantasics. In contrast, SCRs to unpleasant pictures should not differ between the two groups, because the picture-viewing experience doesn't require imagery. However, it's still possible that imagery-based elaboration (or verbal elaboration, for that matter) could amplify the SCR, especially since each picture was presented for 5 seconds.
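The group comparison implied by that prediction can be sketched with a hand-rolled Welch-style t statistic. The numbers below are hypothetical, invented only to mimic the pattern Wicken et al. report (elevated SCL in controls, near-flat in aphantasics); they are not the study's data.

```python
# Sketch of a between-group comparison of baseline-corrected SCL
# (stdlib only). Data are fabricated to illustrate the predicted pattern.

from statistics import mean, variance

def welch_t(a, b):
    """Welch t statistic for two independent samples with unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

controls = [0.42, 0.55, 0.38, 0.61, 0.47]      # elevated SCL to scary stories
aphantasics = [0.05, -0.02, 0.08, 0.01, 0.03]  # near-flat response
t = welch_t(controls, aphantasics)
```

With real data you would also want degrees of freedom and a p value (e.g., `scipy.stats.ttest_ind(..., equal_var=False)`), plus the Group × Condition interaction the design actually calls for.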




For the reading condition, stories were presented as sequential short phrases (to match reading speed across subjects). The control conditions weren't well-matched, unfortunately. This was especially true for Stories, where reading the task instructions served as the neutral comparison condition (instead of reading a neutral story).


Participants

The participants were 24 individuals with intact imagery (based on the Vividness of Visual Imagery Questionnaire and binocular rivalry priming1 scores within the typical range) and 22 self-identified Aphantasics (who were older, on average, than the control participants).2 For the Aphantasia group, seven (out of the original 29) were excluded because their VVIQ or priming scores exceeded the cut-off.


Results

For the Pictures (Perception) condition, the physiological response to Unpleasant vs. Neutral stimuli was not significantly different for the two groups. Incidentally, the skin conductance level (SCL, the continuous measure underlying the SCR) was quite variable, as shown in the shaded portion of the graph below.



Adapted from Fig. 1D (Wicken et al., 2019).  Left: Aggregated progressions of baseline-corrected SCL across the duration of the frightening photos sequence (sampled as average across 5 sec time bins). Right: Mean and standard error across time bins.


The Stories were another story... For the Stories (Imagery) condition, the Aphantasic group did not show an elevated SCL for the scary stories, unlike the controls.



Adapted from Fig. 1B (Wicken et al., 2019).


Or as the authors suggested, “[Aphantasia] is associated with a flat-line physiological response to frightening written, but not perceptual scenarios, supporting imagery’s critical role in emotion.”

I'd say “flat-line” is a little judgy, with the semantic implication that the Aphantasics were dead or something.

I'd like to see subjective ratings of emotion (affect and arousal) for the Pictures and Stories, especially since the primary means of identifying people with Aphantasia is based on subjective report. Nonetheless, this is an intriguing finding, with additional evidence forthcoming (or so I imagine)...


Footnotes

1 See: Is there an objective test for Aphantasia? Binocular rivalry priming can be a useful “objective” measure of aphantasia (Keogh & Pearson, 2018), but it's not necessarily diagnostic at an individual level.

2 Mean age = 33.7 yrs for Aphantasia, mean = 23.0 for controls. I don't know why they didn't recruit age-matched controls from the community, other than the convenience of recruiting university students.


Reference

Wicken M, Keogh R, Pearson J. (2019). The critical role of mental imagery in human emotion: insights from Aphantasia. bioRxiv. 2019 Jan 1:726844.




