Sunday, May 19, 2019

The Secret Lives of Goats

Goats Galore (May 2019)


If you live in a drought-ridden, wildfire-prone area on the West Coast, you may see herds of goats chomping on dry grass and overgrown brush. The sight was initially surprising to many urban dwellers, but it's become commonplace where I live. Announcements appear on local message boards, and families bring their children.


Goats Goats Goats (June 2017)


Goats are glamorous, and super popular on social media now (e.g. Instagram, more Instagram, and Twitter). Over 41 million people have watched Goats Yelling Like Humans - Super Cut Compilation on YouTube. We all know that goats have complex vocalizations, but very few of us know what they mean.





For the health and well-being of livestock, it's advantageous to understand the emotional states conveyed by vocalizations, postures, and other behaviors. A 2015 study measured the acoustic features of different goat calls, along with their associated behavioral and physiological responses. Twenty-two adult goats were put in four situations:
(1) control (neutral)
(2) anticipation of a food reward (positive)
(3) food-related frustration (negative)
(4) social isolation (negative)
Dr. Elodie Briefer and colleagues conducted the study at a goat sanctuary in Kent, UK (Buttercups Sanctuary for Goats). The caprine participants had lived at the sanctuary for at least two years and were fully habituated to humans. Heart rate and respiration were recorded as indicators of arousal, so this dimension of emotion could be considered separately from valence (positive/negative). For conditions #1-3, the goats were tested in pairs (adjacent pens) to avoid the stress of social isolation. They were habituated to the general set-up, to the Frustration and Isolation scenarios, and to the heart rate monitor before the actual experimental sessions, which were run on separate days. Additional details are presented in the first footnote.1





Audio A1. One call produced during a negative situation (food frustration), followed by a call produced during a positive situation (food reward) by the same goat (Briefer et al., 2015).


Behavioral responses during the scenarios were timed and scored; these included tail position, locomotion, rapid head movement, ear orientation, and number of calls. The investigators recorded the calls and produced spectrograms that illustrated the frequencies of the vocal signals.
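Producing a spectrogram is a generic signal-processing step, whatever software the authors actually used. Below is a minimal Python sketch; the synthetic frequency-modulated "call" and all parameter values are illustrative assumptions, not details from Briefer et al.

```python
# Minimal sketch (not from Briefer et al.): spectrogram of a synthetic
# frequency-modulated "call". All values are illustrative.
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs = 44100                          # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)       # one second of signal

# Toy F0 contour: fundamental sweeping around 300 Hz, modulated at 3 Hz
phase = 2 * np.pi * 300 * t - (100 / 3) * np.cos(2 * np.pi * 3 * t)
call = np.sin(phase)

f, times, Sxx = spectrogram(call, fs=fs, nperseg=1024)
keep = f < 2000                     # goat calls occupy low frequencies
plt.pcolormesh(times, f[keep], Sxx[keep], shading='auto')
plt.xlabel('Time (s)'); plt.ylabel('Frequency (Hz)')
plt.show()
```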



The call on the left (a) was emitted during food frustration (first call in Audio A1). The call on the right (b) was produced during food reward; it has a lower fundamental frequency (F0) and smaller frequency modulations. Modified from Fig. 2 (Briefer et al., 2015).


Both negative and positive food situations resulted in greater goat arousal (measured by heart rate) than the neutral control condition and the low-arousal negative condition (social isolation). Behaviorally speaking, arousal and valence had different indicators:
During high arousal situations, goats displayed more head movements, moved more, had their ears pointed forwards more often and to the side less often, and produced more calls. ... In positive situations, as opposed to negative ones, goats had their ears oriented backwards less often and spent more time with the tail up.
Happy goats have their tails up, and do not point their ears backwards. I think I would need a lot more training to identify the range of goat emotions conveyed in my amateur video. At least I know not to stare at them, but next time I should read more about their reactions to human head and body postures.


Do goats show a left or right hemisphere advantage for vocal perception?

Now that the researchers have characterized the valence and arousal communicated by goat calls, another study asked whether goats show a left hemisphere or right hemisphere “preference” for the perception of different calls (Baciadonna et al., 2019). How is this measured, you ask?

Head-Turning in Goats and Babies

The head-turn preference paradigm is widely used in studies of speech perception in infants.

Figure from Prosody cues word order in 7-month-old bilingual infants (Gervain & Werker, 2013).




However, I don't know whether this paradigm is used to assess lateralization of speech perception in babies. In the animal literature, a similar head-orienting response is a standard experimental procedure. For now, we will have to accept the underlying assumption that orienting left or right may be an indicator of a contralateral hemispheric “preference” for that specific vocalization (i.e., orienting to the left side indicates a right hemisphere dominance, and vice versa).
The experimental procedure usually applied to test functional auditory asymmetries in response to vocalizations of conspecifics and heterospecifics is based on a major assumption (Teufel et al. 2007; Siniscalchi et al. 2008). It is assumed that when a sound is perceived simultaneously in both ears, the head orientation to either the left or right side is an indicator of the side of the hemisphere that is primarily involved in the response to the stimulus presented. There is strong evidence that this is the case in humans ... The assumption is also supported by the neuroanatomic evidence of the contralateral connection of the auditory pathways in the mammalian brain (Rogers and Andrew 2002; Ocklenburg et al. 2011).

The experimental set-up to test this in goats is shown below.



A feeding bowl (filled with a tasty mixture of dry pasta and hay) was fixed at the center of the arena opposite to the entrance. The speakers were positioned at a distance of 2 meters from the right and left side of the bowl and were aligned to it. 'X' indicates the position of the Experimenter. Modified from Fig. 2 (Baciadonna et al., 2019).


Four types of vocalizations were played over the speakers: food anticipation, food frustration, isolation, and dog bark (presumably a negative stimulus). Three examples of each vocalization were played, each from a different and unfamiliar goat (or dog).

The various theories of brain lateralization of emotion predicted different results. The right hemisphere model predicts right hemisphere dominance (head turn to the left) for high-arousal emotion regardless of valence (food anticipation, food frustration, dog barks). In contrast, the valence model predicts right hemisphere dominance for processing negative emotions (food frustration, isolation, dog barks), and left hemisphere dominance for positive emotions (food anticipation). The conspecific model predicts left hemisphere dominance for all goat calls (“familiar and non-threatening”) and right hemisphere dominance for dog barks. Finally, a general emotion model predicts right hemisphere dominance for all of the vocalizations, because they're all emotion-laden.

The results sort of supported the conspecific model (according to the authors), if we now accept that dog barks are actually “familiar and non-threatening” [if I understand correctly]. The head-orienting response did not differ significantly between the four vocalization types, and there was a slight bias for head orienting to the right (p = .046 vs. chance level) when collapsed across all stimuli.2
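For readers wondering what “p = .046 vs. chance level” amounts to operationally: one simple way to test a lateral bias is a binomial test of right-vs-left orienting counts against 50%. The counts below are invented for illustration; the paper's actual numbers aren't reproduced in this post.

```python
# Hypothetical illustration -- counts are made up, not from Baciadonna et al.
from scipy.stats import binomtest

right_turns, total_trials = 88, 150     # invented right-orienting counts
result = binomtest(right_turns, total_trials, p=0.5, alternative='two-sided')
print(f"right-turn proportion = {right_turns / total_trials:.2f}, "
      f"p = {result.pvalue:.3f}")
```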

The time to resume feeding after hearing a vocalization (a measure of fear) didn't differ between goat calls and dog barks, so the authors concluded that “goats at our study site may have been habituated to dog barks and that they did not perceive dog barks as a serious threat.” However, if a Siberian Husky breaks free of its owner and runs around a fenced-in rent-a-goat herd, chaos may ensue.





Footnotes

1 Methodological details:
“(1) During the control situation, goats were left unmanipulated in a pen with hay (‘Control’). This situation did not elicit any calls, but allowed us to obtain baseline values for physiological and behavioural data. (2) The positive situation was the anticipation of an attractive food reward that the goats had been trained to receive during 3 days of habituation (‘Feeding’). (3) After goats had been tested with the Feeding situation, they were tested with a food frustration situation. This consisted of giving food to only one of the goats in the pair and not to the subject (‘Frustration’). (4) The second negative situation was brief isolation, out of sight from conspecifics behind a hedge. For this situation, goats were tested alone and not in a pair (‘Isolation’).”

2 The replication police will certainly go after such a marginal significance level, but I would like to see them organize a “Many Goats in Many Goat Sanctuaries” replication project.


References

Baciadonna L, Nawroth C, Briefer EF, McElligott AG. (2019). Perceptual lateralization of vocal stimuli in goats. Curr Zool. 65(1):67-74. [PDF]

Briefer EF, Tettamanti F, McElligott AG. (2015). Emotions in goats: mapping physiological, behavioural and vocal profiles. Animal Behaviour 99:131-43. [PDF]



Saturday, April 27, 2019

The Paracetamol Papers


I have secretly obtained a large cache of files from Johnson & Johnson, makers of TYLENOL®, the ubiquitous pain relief medication (generic name: acetaminophen in North America, paracetamol elsewhere). The damaging information contained in these documents has been suppressed by the pharmaceutical giant, for reasons that will become obvious in a moment.1

After a massive upload of materials to Wikileaks, it can now be revealed that Tylenol not only...
...but along with the good comes the bad. Acetaminophen (paracetamol) also has ghastly negative effects that tear at the very fabric of society. These OTC tablets...

In a 2018 review of the literature, Ratner and colleagues warned:
“In many ways, the reviewed findings are alarming. Consumers assume that when they take an over-the-counter pain medication, it will relieve their physical symptoms, but they do not anticipate broader psychological effects.”

In the latest installment of this alarmist saga, we learn that acetaminophen blunts positive empathy,2 i.e. the capacity to appreciate and identify with the positive emotions of others (Mischkowski et al., 2019). I'll discuss those findings another time.

But now, let's evaluate the entire TYLENOL® oeuvre by taking a step back and examining the plausibility of the published claims. To summarize, one of the most common over-the-counter, non-narcotic, non-NSAID pain-relieving medications in existence supposedly alleviates the personal experience of hurt feelings, social pain, and heartache (positive outcomes). At the same time, TYLENOL® blunts the phenomenological experience of positive emotion and diminishes empathy for other people's experiences, both good and bad (negative outcomes). Published articles have reported that many of these effects can be observed after ONE REGULAR DOSE of paracetamol. These findings are based on how undergraduates judge a series of hypothetical stories.

One major problem (which is not specific to The Paracetamol Papers) concerns the ecological validity of laboratory tasks as measures of the cognitive and emotional constructs of interest. This issue is critical, but outside the main scope of our discussion today. More to the point, an experimental manipulation may cause a statistically significant shift in a variable of interest, but ultimately we have to decide whether a circumscribed finding in the lab has broader implications for society at large.


Why TYLENOL®?

Another puzzling element is, why choose acetaminophen as the exclusive pain medication of interest? Its mechanisms of action for relieving fever, headache, and other pains are unclear. Thus, the authors don't have a specific, principled reason for choosing TYLENOL® over Advil (ibuprofen) or aspirin. Presumably, the effects should generalize, but that doesn't seem to be the case. For instance, ibuprofen actually Increases Social Pain in men.

The analgesic effects of acetaminophen are mediated by a complex series of cellular mechanisms (Mallet et al., 2017). One proposed mechanism involves descending serotonergic bulbospinal pathways from the brainstem to the spinal cord. This isn't exactly Prozac territory, so the analogy between Tylenol and SSRI antidepressants isn't apt. The capsaicin receptor TRPV1 and the Cav3.2 calcium channel might also be part of the action (Mallet et al., 2017). A recently recognized player is the CB1 cannabinoid receptor. AM404, a metabolite of acetaminophen, indirectly activates CB1 by inhibiting the breakdown and reuptake of anandamide, a naturally occurring cannabinoid in the brain (Mallet et al., 2017).



Speaking of cannabinoids, cannabidiol (CBD), the non-intoxicating cousin of THC, has a high profile now because of its soaring popularity for many ailments. Ironically, CBD has a very low affinity for CB1 and CB2 receptors and may act instead via serotonergic 5-HT1A receptors [PDF], as a modulator of μ- and δ-opioid receptors, and as an antagonist and inverse agonist at several G protein-coupled receptors. Most CBD use seems to be in the non-therapeutic (placebo) range, because the effective dose for, let's say, anxiety is 10-20 times higher than what's in the average commercial product. You'd have to eat 3-6 bags of cranberry gummies for 285-570 mg of CBD (close to the 300-600 mg recommended dose). Unfortunately, you would also ingest 15-30 mg of THC, which would be quite intoxicating.
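To make the gummy arithmetic explicit, here is a back-of-the-envelope sketch. The per-bag CBD and THC contents are assumptions reverse-engineered from the ranges quoted above (285-570 mg CBD and 15-30 mg THC across 3-6 bags), not label-verified values.

```python
# Back-of-the-envelope check of the dose arithmetic above. Per-bag contents
# are assumptions implied by the post's ranges, not verified product data.
CBD_PER_BAG_MG = 95.0    # assumed: 285 mg / 3 bags
THC_PER_BAG_MG = 5.0     # assumed: 15 mg / 3 bags

for target_cbd in (300, 600):            # recommended anxiolytic range (mg)
    bags = target_cbd / CBD_PER_BAG_MG
    thc = bags * THC_PER_BAG_MG
    print(f"{target_cbd} mg CBD ~ {bags:.1f} bags -> {thc:.0f} mg THC")
```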



Words Have Meanings

If acetaminophen were so effective in “mending broken hearts”, “easing heartaches”, and providing a “cure for a broken heart”, we would be a society of perpetually happy automatons, wiping away the suffering of breakup and divorce with a mere OTC tablet. We'd have Tylenol epidemics and Advil epidemics to rival the scourge of the present Opioid Epidemic.3

Meanwhile, social and political discourse in the US has reached a new low. Ironically, the paracetamol “blissed-out” population is enraged because they can't identify with the feelings or opinions of the masses who are 'different' than they are. Somehow, I don't think it's from taking too much Tylenol. A large-scale global survey could put that thought to rest for good.




Footnotes

1 This is not true, of course; I was only kidding. All of the information presented here is publicly available in peer-reviewed journal articles and published press reports.

2 except for when it doesn’t – “In contrast, effects on perceived positivity of the described experiences or perceived pleasure in scenario protagonists were not significant” (Mischkowski et al., 2019).

3 Yes, I made this up too. It is entirely fictitious; no one has ever claimed this, to the best of my knowledge.


References

Mallet C, Eschalier A, Daulhac L. (2017). Paracetamol: update on its analgesic mechanism of action. In: Pain Relief – From Analgesics to Alternative Therapies.

Mischkowski D, Crocker J, Way BM. (2019). A Social Analgesic? Acetaminophen (Paracetamol) Reduces Positive Empathy. Front Psychol. 10:538.


Saturday, April 13, 2019

Does ketamine restore lost synapses? It may, but that doesn't explain its rapid clinical effects


Bravado SPRAVATO™ (esketamine)
© Janssen Pharmaceuticals, Inc. 2019.


Ketamine is the miracle drug that cures depression:
“Recent studies report what is arguably the most important discovery in half a century: the therapeutic agent ketamine that produces rapid (within hours) antidepressant actions in treatment-resistant depressed patients (4, 5). Notably, the rapid antidepressant actions of ketamine are associated with fast induction of synaptogenesis in rodents and reversal of the atrophy caused by chronic stress (6, 7).”

– Duman & Aghajanian (2012). Synaptic Dysfunction in Depression: Potential Therapeutic Targets. Science 338: 68-72.

Beware the risks of ketamine:
“While ketamine may be beneficial to some patients with mood disorders, it is important to consider the limitations of the available data and the potential risk associated with the drug when considering the treatment option.”

– Sanacora et al. (2017). A Consensus Statement on the Use of Ketamine in the Treatment of Mood Disorders. JAMA Psychiatry 74: 399-405.

Ketamine, dark and light:
Is ketamine a destructive club drug that damages the brain and bladder? With psychosis-like effects widely used as a model of schizophrenia? Or is ketamine an exciting new antidepressant, the “most important discovery in half a century”?

For years, I've been utterly fascinated by these separate strands of research that rarely (if ever) intersect. Why is that? Because there's no such thing as “one receptor, one behavior.” And because like most scientific endeavors, neuro-pharmacology/psychiatry research is highly specialized, with experts in one microfield ignoring the literature produced by another...

– The Neurocritic (2015). On the Long Way Down: The Neurophenomenology of Ketamine

Confused?? You're not alone.


FDA Approval

The animal tranquilizer and club drug ketamine, now known as a “miraculous” cure for treatment-resistant depression, has been approved by the FDA in a nasal spray formulation. No more messy IV infusions at shady clinics.

Here's a key Twitter thread that marks the occasion:


How does it work?

A new paper in Science (Moda-Sava et al., 2019) touts the importance of spine formation and synaptogenesis (basically, the remodeling of synapses in microcircuits) in prefrontal cortex, a region important for the top-down control of behavior. Specifically, ketamine and its downstream actions are involved in the creation of new spines on dendrites, and in the formation of new synapses. But it turns out this is NOT linked to the rapid improvement in 'depressive' symptoms observed in a mouse model.



So I think we're still in the dark about why some humans can show immediate (albeit short-lived) relief from their unrelenting depression symptoms after ketamine infusion. Moda-Sava et al. say:
Ketamine’s acute effects on depression-related behavior and circuit function occur rapidly and precede the onset of spine formation, which in turn suggests that spine remodeling may be an activity-dependent adaptation to changes in circuit function (83, 88) and is consistent with theoretical models implicating synaptic homeostasis mechanisms in depression and the stress response (89, 90). Although not required for inducing ketamine’s effects acutely, these newly formed spines are critical for sustaining the antidepressant effect over time.

But the problem is that depressed humans require constant treatment with ketamine to maintain any semblance of an effective clinical response, because the beneficial effect is fleeting. If we accept the possibility that ketamine acts through the mTOR signalling pathway, then detrimental long-term effects on the brain (and non-brain systems) may occur (e.g., bladder damage, various cancers, psychosis).

But let's stay isolated in our silos, with our heads in the sand.


Thanks to @o_ceifero for alerting me to this study.

Further Reading

Ketamine for Depression: Yay or Neigh?

Warning about Ketamine in the American Journal of Psychiatry

Chronic Ketamine for Depression: An Unethical Case Study?

still more on ketamine for depression

Update on Ketamine in Palliative Care Settings

Ketamine - Magic Antidepressant, or Expensive Illusion? - by Neuroskeptic

Fighting Depression with Special K - by Scicurious

On the Long Way Down: The Neurophenomenology of Ketamine


Reference

Moda-Sava RN, Murdock MH, Parekh PK, Fetcho RN, Huang BS, Huynh TN, Witztum J, Shaver DC, Rosenthal DL, Alway EJ, Lopez K, Meng Y, Nellissen L, Grosenick L, Milner TA, Deisseroth K, Bito H, Kasai H, Liston C. (2019). Sustained rescue of prefrontal circuit dysfunction by antidepressant-induced spine formation. Science 364(6436). pii: eaat8078.


Sunday, March 31, 2019

An Amicable Discussion About Psychology and Neuroscience


People like conflict (the interpersonal kind, not BLUE).1 Or at least, they like scientific debate at conferences. Panel discussions that are too harmonious seem to be divisive. Some people will say, “well, now THAT wasn't very controversial.” But as I mentioned last time, one highlight of the 2019 Cognitive Neuroscience Society Annual Meeting was a Symposium organized by Dr. David Poeppel.2

Special Session - The Relation Between Psychology and Neuroscience, David Poeppel, Organizer, Grand Ballroom
Whether we study single cells, measure populations of neurons, characterize anatomical structure, or quantify BOLD, whether we collect reaction times or construct computational models, it is a presupposition of our field that we strive to bridge the neurosciences and the psychological/cognitive sciences. Our tools provide us with ever-greater spatial resolution and ideal temporal resolution. But do we have the right conceptual resolution? This conversation focuses on how we are doing with this challenge, whether we have examples of successful linking hypotheses between psychological and neurobiological accounts, whether we are missing important ideas or tools, and where we might go or should go, if all goes well. The conversation, in other words, examines the very core of cognitive neuroscience.

Conversation. Not debate. So first, let me summarize the conversation. Then I'll get back to the merits (demerits) of debate. In brief, many of the BIG IDEAS motifs of 2017 were revisited...
  • David Marr and the importance of work at all levels of analysis 
  • What are the “laws” that bridge these levels of analysis?
  • “Emergent properties” – a unique higher-level entity (e.g., consciousness, a flock of birds) emerges from lower-level activity (e.g., patterns of neuronal firing, the flight of individual birds)... the whole is greater than the sum of its parts
  • Generative Models – formal models that make computational predictions
...with interspersed meta-commentary on replication, publishing, and Advice to Young Neuroscientists. Without further ado:

Dr. David Poeppel – Introductory Remarks that examined the very core of cognitive neuroscience (i.e., “we have to face the music”).
  • the conceptual basis of cognitive neuroscience shouldn't be correlation 
For example, fronto-parietal network connectivity (as determined by resting state fMRI) is associated with some cognitive function, but that doesn't mean it causes or explains the behavior (or internal thought). We all know this, and we all know that “we must want more!” But we haven't the vaguest idea of how to relate complex psychological constructs such as attention, volition, and emotion to ongoing biological processes involving calcium channels, dendrites, and glutamatergic synapses.
  • but what if the psychological and the biological are categorically dissimilar??
In their 2003 book, Philosophical Foundations of Neuroscience, Bennett and Hacker warned that cognitive neuroscientists make the cardinal error of “...commit[ting] the mereological fallacy, the tendency to ascribe to the brain psychological concepts that only make sense when ascribed to whole animals.”
“For the characteristic form of explanation in contemporary cognitive neuroscience consists in ascribing psychological attributes to the brain and its parts in order to explain the possession of psychological attributes and the exercise (and deficiencies in the exercise) of cognitive powers by human beings.” (p. 3)

On that optimistic note, the four panelists gave their introductory remarks.

(1) Dr. Lila Davachi asked, “what is the value of the work we do?” Uh, well, that's a difficult question. Are we improving society in some way? Adding to a collective body of knowledge that may (or may not) be the key to explaining behavior and curing disease? Although still difficult, Dr. Davachi posed an easier question, “what are your goals?” To describe behavior, predict behavior (correlation), explain behavior (causation), change behavior (manipulation)? But “what counts as an explanation?” I don't think anyone really answered that question. Instead she mentioned the recurring themes of levels of analysis (without invoking Marr by name), emergent properties (the flock of birds analogy), and bridging laws (that link levels of analysis). The correct level of analysis is/are the one(s) that advance your goals. But what to do about “level chauvinism” in contemporary neuroscience? This question was raised again and again.

(2) Dr. Jennifer Groh jumped right out of the gate with this motif. There are competing narratives in neuroscience, which we can call the electrode level (recording from neurons) vs. the neuroimaging level (recording large-scale brain activations or “network” interactions based on an indirect measure of neural activity). They make different assumptions about what is significant or worth studying. I found this interesting, since hers is the only lab on the panel that records from actual neurons. But there are ever more reductionist scientists who always throw stones at those above them. Neurobiologists (at the electrode level and below) are operating at ever more granular levels of detail, walking away from cognitive neuroscience entirely (who wants to be a dualist, anyway?). I knew exactly where she was going with this: the field is being driven by techniques, doing experiments merely because you can (cough — OPTOGENETICS — cough). Speaking for myself, however, the fact that neurobiologists can control mouse behavior by manipulating highly specific populations of cells raises the specter of insecurity... certain areas of research might not be considered “neuroscience” any more by the bulk of practitioners in the field (just attend the Society for Neuroscience annual meeting).

(3) Dr. Catherine Hartley continued with the recurring theme that we need both prediction and explanation to reach our ultimate goal of understanding behavior. Is a prediction system enough? No, we must know how the black box functions by studying “latent processes” such as representation and computation. But what if we're wrong about representations, I thought? The view of @PsychScientists immediately came to mind. Sorry to interrupt Dr. Hartley, but here's Golonka and Wilson in Ecological Representations:
Mainstream cognitive science and neuroscience both rely heavily on the notion of representation in order to explain the full range of our behavioral repertoire. The relevant feature of representation is its ability to designate (stand in for) spatially or temporally distant properties ... While representational theories are potentially a powerful foundation for a good cognitive theory, problems such as grounding and system-detectable error remain unsolved. For these and other reasons, ecological explanations reject the need for representations and do not treat the nervous system as doing any mediating work. However, this has left us without a straight-forward vocabulary to engage with so-called 'representation-hungry' problems or the role of the nervous system in cognition.

They go on to invoke James J Gibson's ecological information functions. But I can already hear Dr. Poeppel's colleague @GregoryHickok and others on Twitter debating with @PsychScientists. Oh. Wait. Debate.

Returning to The Conversation that I so rudely interrupted, Dr. Hartley gave some excellent examples of theories that link psychology and neuroscience. The trichromatic theory of color vision — the finding that three independent channels convey color information — was based on psychophysics in the early-to-mid 1800s (Young–Helmholtz theory). This was over a century before the discovery of cones in the retina, which are sensitive to three different wavelengths. She also mentioned the more frequently used examples of Tolman's cognitive maps (which predated The Hippocampus as a Cognitive Map by 30 years) and error-driven reinforcement learning (Bush–Mosteller and Rescorla–Wagner, both of which predate knowledge of dopamine neurons). To generate good linking hypotheses in the present, we need to construct formal models that make quantitative predictions (generative models).
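To see how little machinery a generative model can require, consider the Rescorla–Wagner rule she cited: associative strength is updated by a trial-by-trial prediction error. This is a generic textbook sketch with arbitrary parameter values, not anyone's published model code.

```python
# Generic Rescorla-Wagner sketch: associative strength V is nudged toward the
# outcome by a prediction error on every trial. Parameters are arbitrary.
alpha = 0.3        # learning rate
lam = 1.0          # outcome magnitude on each reinforced trial
V = 0.0            # initial associative strength

for trial in range(1, 11):
    prediction_error = lam - V       # surprise: outcome minus prediction
    V += alpha * prediction_error    # error-driven update
    print(f"trial {trial:2d}: V = {V:.3f}")
```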

(4) Dr. Sharon Thompson-Schill gave a brief introduction with no slides, which is good because this post has gotten very long. For this reason, I won't cover the panel discussion and the Q&A period, which continued the same themes outlined above and expanded on “predictivism” (predictive chauvinism and data-driven neuroscience) and raised new points like the value (or not) of introspection in science. When the Cognitive Neuroscience Society updates their YouTube channel, I'll let you know. Another source is the excellent live tweeting of @VukovicNikola. But to wrap up, Dr. Thompson-Schill asked members of the audience whether they consider themselves psychologists or neuroscientists. Most identified as neuroscientists (which is a relative term, I think). Although more people will talk to you on a plane if you say you're a psychologist, “neuroscience is easy, psychology is hard,” a surprising take-home message.


Debating Debates

I've actually wanted to see more debating at the CNS meeting. For instance, the Society for the Neurobiology of Language (SNL) often features a lively debate at their conferences.3 Several examples are listed below.

2016:
Debate: The Consequences of Bilingualism for Cognitive and Neural Function
Ellen Bialystok & Manuel Carreiras

2014:
What counts as neurobiology of language – a debate
Steve Small, Angela Friederici

2013: Panel Discussions
The role of semantic information in reading aloud
Max Coltheart vs Mark Seidenberg

2012: Panel Discussions
What is the role of the insula in speech and language?
Nina F. Dronkers vs Julius Fridriksson


This one-on-one format has been very rare at CNS. Last year we saw a panel of four prominent neuroscientists address/debate...
Big Theory versus Big Data: What Will Solve the Big Problems in Cognitive Neuroscience?


Added-value entertainment was provided by Dr. Gary Marcus, which speaks to the issue of combative personalities dominating the scene.4


Gary Marcus talking over Jack Gallant. Eve Marder is out of the frame.
image by @CogNeuroNews


I'm old enough to remember the most volatile debate in CNS history, which was held (sadly) at the New York Marriott World Trade Center Hotel in 2001. Dr. Nancy Kanwisher and Dr. Isabel Gauthier debated whether face recognition (and activation of the fusiform face area) is a 'special' example of domain specificity (and perhaps an innate ability), or a manifestation of plasticity due to our exceptional expertise at recognizing faces:
A Face-Off on Brain Studies / How we recognize people and objects is a matter of debate
. . .

At the Cognitive Neuroscience Society meeting in Manhattan last week, a panel of scientists on both sides of the debate presented their arguments. On one side is Nancy Kanwisher of MIT, who first proposed that the fusiform gyrus was specifically designed to recognize faces–and faces alone–based on her findings using a magnetic resonance imaging device. Then, Isabel Gauthier, a neuroscientist at Vanderbilt, talked about her research, showing that the fusiform gyrus lights up when looking at many different kinds of objects people are skilled at recognizing.
Kudos to Newsday for keeping this article on their site after all these years.


Footnotes

1 This is the color-word Stroop task: name the font color, rather than read the word. BLUE elicits conflict between the overlearned response ("read the word blue") and the task requirement (say "red").

2 aka the now-obligatory David Poeppel session on BIG STUFF. See these posts:

The Big Ideas in Cognitive Neuroscience, Explained #CNS2017

Big Theory, Big Data, and Big Worries in Cognitive Neuroscience #CNS2018
3 Let me now get on my soapbox to exhort the conference organizers to keep better online archives  — with stable urls — so I don't have to hunt through archive.org to find links to past meetings.

4 Although this is really tangential, I'm reminded of the Democratic Party presidential contenders in the US. Who deserves more coverage, Beto O'Rourke or Elizabeth Warren? Bernie Sanders or Kamala Harris?


Friday, March 22, 2019

#CNS2019



It's March, an odd-numbered year, must mean.... it's time for the Cognitive Neuroscience Society Annual Meeting to be in San Francisco!

I only started looking at the schedule yesterday and noticed the now-obligatory David Poeppel session on BIG stuff1 on Saturday (March 23, 2019):

Special Session - The Relation Between Psychology and Neuroscience, David Poeppel, Organizer,  Grand Ballroom

Then I clicked on the link and saw a rare occurrence: an all-female slate of speakers!



Whether we study single cells, measure populations of neurons, characterize anatomical structure, or quantify BOLD, whether we collect reaction times or construct computational models, it is a presupposition of our field that we strive to bridge the neurosciences and the psychological/cognitive sciences. Our tools provide us with ever-greater spatial resolution and ideal temporal resolution. But do we have the right conceptual resolution? This conversation focuses on how we are doing with this challenge, whether we have examples of successful linking hypotheses between psychological and neurobiological accounts, whether we are missing important ideas or tools, and where we might go or should go, if all goes well. The conversation, in other words, examines the very core of cognitive neuroscience.

Also on the schedule tomorrow is the public lecture and keynote address by Matt Walker Why Sleep?
Can you recall the last time you woke up without an alarm clock feeling refreshed, not needing caffeine? If the answer is “no,” you are not alone. Two-thirds of adults fail to obtain the recommended 8 hours of nightly sleep. I doubt you are surprised by the answer to this question, but you may be surprised by the consequences. This talk will describe not only the good things that happen when you get sleep, but the alarmingly bad things that happen when you don’t get enough. The presentation will focus on the brain (learning, memory, aging, Alzheimer’s disease, education), but further highlight disease-related consequences in the body (cancer, diabetes, cardiovascular disease). The take-home: sleep is the single most effective thing we can do to reset the health of our brains and bodies.

Why sleep, indeed.

Meanwhile, Foals are playing tonight at The Fox Theater in Oakland. Tickets are still available.




view video on YouTube.


ADDENDUM: The sequel was finally posted on March 31: An Amicable Discussion About Psychology and Neuroscience.


Footnote

1 See these posts:

The Big Ideas in Cognitive Neuroscience, Explained #CNS2017

Big Theory, Big Data, and Big Worries in Cognitive Neuroscience #CNS2018


Tuesday, February 19, 2019

Depth Electrodes or Digital Biomarkers? The future of mood monitoring


Mood Monitoring via Invasive Brain Recordings or Smartphone Swipes

Which Would You Choose?


That's not really a fair question. The ultimate goal of invasive recordings is one of direct intervention, by delivering targeted brain stimulation as a treatment. But first you have to establish a firm relationship between neural activity and mood. Well, um, smartphone swipes (the way you interact with your phone) aim to establish a firm relationship between your “digital phenotype” and your mood. And then refer you to an app for a precision intervention. Or to your therapist / psychiatrist, who has to buy into use of the digital phenotyping software.

On the invasive side of the question, DARPA has invested heavily in deep brain stimulation (DBS) as a treatment for many disorders – Post-Traumatic Stress Disorder (PTSD), Major Depression, Borderline Personality Disorder, General Anxiety Disorder, Traumatic Brain Injury, Substance Abuse/Addiction, Fibromyalgia/Chronic Pain, and memory loss. None of the work has led to effective treatments (yet?), but the DARPA research model has established large centers of collaborating scientists who record from the brains of epilepsy patients. And a lot of very impressive papers have emerged – some promising, others not so much.

One recent study (Kirkby et al., 2018) used machine learning to discover brain networks that encode variations in self-reported mood. The metric was coherence between amygdala and hippocampal activity in the β-frequency band (13-30 Hz). I can't do justice to their work in the context of this post, but I'll let the authors' graphical abstract speak for itself (and leave questions like, why did it only work in 13 out of 21 of your participants? for later).
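Setting aside their machine-learning pipeline, the core measurement is ordinary magnitude-squared coherence averaged over the β band. The sketch below uses synthetic signals and assumed parameters; it is not the authors' code.

```python
# Minimal sketch (not Kirkby et al.'s pipeline): beta-band coherence between
# two synthetic signals that share a common 20 Hz component.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 500                               # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)           # 60 s of simulated data
shared = np.sin(2 * np.pi * 20 * t)    # common beta-range (20 Hz) rhythm

amygdala = shared + rng.standard_normal(t.size)
hippocampus = shared + rng.standard_normal(t.size)

f, Cxy = coherence(amygdala, hippocampus, fs=fs, nperseg=2 * fs)
beta = (f >= 13) & (f <= 30)
print(f"mean beta-band (13-30 Hz) coherence: {Cxy[beta].mean():.2f}")
```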




Mindstrong

Then along comes a startup tech company called Mindstrong, whose Co-Founder and President is none other than Dr. Thomas Insel, former director of NIMH, and one of the chief architects1 of the Research Domain Criteria (RDoC), “a research framework for new approaches to investigating mental disorders” that eschews the DSM-5 diagnostic bible. The Appendix chronicles the timeline of Dr. Insel's evolution from “mindless” RDoC champion to “brainless” wearables/smartphone tech proselytizer.2


From Wired:
. . .

At Mindstrong, one of the first tests of the [“digital phenotype”] concept will be a study of how 600 people use their mobile phones, attempting to correlate keyboard use patterns with outcomes like depression, psychosis, or mania. “The complication is developing the behavioral features that are actionable and informative,” Insel says. “Looking at speed, looking at latency or keystrokes, looking at error—all of those kinds of things could prove to be interesting.”

Curiously, in their list of digital biomarkers, they differentiate between executive function and cognitive control — although their definitions were overlapping (see my previous post, Is executive function different from cognitive control? The results of an informal poll).
Mindstrong tracks five digital biomarkers associated with brain health: Executive function, cognitive control, working memory, processing speed, and emotional valence. These biomarkers are generated from patterns in smartphone use such as swipes, taps, and other touchscreen activities, and are scientifically validated to provide measurements of cognition and mood.
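Mindstrong's actual features are proprietary, so the following is purely a toy illustration of what “patterns in smartphone use” might reduce to numerically: summary statistics over inter-tap intervals. Every timestamp and feature name below is invented.

```python
# Toy illustration only: Mindstrong's real features are proprietary. This just
# shows the kind of timing statistics derivable from raw tap timestamps.
import numpy as np

tap_times = np.array([0.00, 0.21, 0.45, 0.62, 1.30, 1.52])  # seconds (made up)
intervals = np.diff(tap_times)               # inter-tap latencies

features = {
    "mean_latency_s": float(intervals.mean()),     # overall tapping speed
    "latency_sd_s": float(intervals.std()),        # variability
    "long_pauses": int((intervals > 0.5).sum()),   # hesitations > 500 ms
}
print(features)
```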

Whither RDoC?

NIMH established a mandate requiring all clinical trials to postulate a neural circuit “mechanism” that would be responsible for any efficacious response. Thus, clinical investigators were forced to make up simplistic biological explanations for their psychosocial interventions:

“I hypothesize that the circuit mechanism for my elaborate new psychotherapy protocol which eliminates fear memories (e.g., specific phobias, PTSD) is implemented by down-regulation of amygdala activity while participants view pictures of fearful faces using the Hariri task.”



[a fictitious example]


I'm including a substantial portion of the February 27, 2014 text here because it's important.
NIMH is making three important changes to how we will fund clinical trials.

First, future trials will follow an experimental medicine approach in which interventions serve not only as potential treatments, but as probes to generate information about the mechanisms underlying a disorder. Trial proposals will need to identify a target or mediator; a positive result will require not only that an intervention ameliorated a symptom, but that it had a demonstrable effect on a target, such as a neural pathway implicated in the disorder or a key cognitive operation. While experimental medicine has become an accepted approach for drug development, we believe it is equally important for the development of psychosocial treatments. It offers us a way to understand the mechanisms by which these treatments are leading to clinical change.

OK, so the target could be a key cognitive operation. But let's say your intervention is a Housing First initiative in homeless individuals with severe mental illness and co-morbid substance abuse. Your manipulation is to compare quality of life outcomes for Housing First with Assertive Community Treatment vs. Congregate Housing with on-site supports vs. treatment as usual. What is the key cognitive operation here? Fortunately, this project was funded by the Canadian government and did not need to compete for NIMH funding.

I think my ultimate issue is one of fundamental fairness. Is it OK to skate away from the wreckage and profit by making millions of dollars? From Wired:
“I spent 13 years at NIMH really pushing on the neuroscience and genetics of mental disorders, and when I look back on that I realize that while I think I succeeded at getting lots of really cool papers published by cool scientists at fairly large costs—I think $20 billion—I don’t think we moved the needle in reducing suicide, reducing hospitalizations, improving recovery for the tens of millions of people who have mental illness,” Insel says. “I hold myself accountable for that.”

But how? You've admitted to spending $20 billion on cool projects and cool papers and cool scientists who do basic research. This has great value. But the big mistakes were an unrealistic promise of treatments and cures, and the charade of forcing scientists who study C. elegans to explain how they're going to cure psychiatric disorders.


Footnotes

1 Dr. Bruce Cuthbert was especially instrumental, as well as a large panel of experts. But since this post is about digital biomarkers, the former director of NIMH is the focus of RDoC here.

2 The Insel archives of the late Dr. Mickey Nardo in his prolific blog, 1boringoldman.com, are a must-read. I also wish the late Dr. Barney Carroll was still here to issue his trenchant remarks and trademark witticisms.


Reference

Kirkby LA, Luongo FJ, Lee MB, Nahum M, Van Vleet TM, Rao VR, Dawes HE, Chang EF, Sohal VS. (2018). An Amygdala-Hippocampus Subnetwork that Encodes Variation in Human Mood. Cell 175(6):1688-1700.e14.


Additional Reading - Digital Phenotyping

Jain SH, Powers BW, Hawkins JB, Brownstein JS. (2015). The digital phenotype. Nat Biotechnol. 33(5):462-3. [usage of the term here means data mining of content such as Twitter and Google searches, rather than physical interactions with a smartphone]

Insel TR. (2017). Digital Phenotyping: Technology for a New Science of Behavior. JAMA 318(13):1215-1216. [smartphone swipes, NOT content: “Who would have believed that patterns of typing and scrolling could reveal individual fingerprints of performance, capturing our neurocognitive function continuously in the real world?”]

Insel TR. (2017). Join the disruptors of health science. Nature 551(7678):23-26. [conversion to the SF Bay Area/Silicon Valley mindset]. Key quote:
“But what struck me most on moving from the Beltway to the Bay Area was that, unlike pharma and biotech, tech companies enter biomedical and health research with a pedigree of software research and development, and a confident, even cocky, spirit of disruption and innovation. They have grown by learning how to move quickly from concept to execution. Software development may generate a minimally viable product within weeks. That product can be refined through ‘dogfooding’ (testing it on a few hundred employees, families or friends) in a month, then released to thousands of users for rapid iterative improvement.”
[is ‘dogfooding’ a real term?? if that's how you're going to test technology designed to help people with severe mental illnesses — without the input of the consumers themselves — YOU WILL BE DOOMED TO FAILURE.]

Philip P, De-Sevin E, Micoulaud-Franchi JA. (2018). Technology as a Tool for Mental Disorders. JAMA 319(5):504.

Insel TR. (2018). Technology as a Tool for Mental Disorders-Reply. JAMA  319(5):504.

Insel TR. (2018). Digital phenotyping: a global tool for psychiatry. World Psychiatry 17(3):276-277.


Appendix - a selective history of RDoC publications

Post-NIMH Transition (articles start appearing less than a month later) 

Saturday, February 02, 2019

Is executive function different from cognitive control? The results of an informal poll

It ended in a tie!




Granted, this is a small and biased sample, and I don't have a large number of followers. The answers might have been different had @russpoldrack (Yes in a landslide) or @Neuro_Skeptic (n=12,458 plus 598 wacky write-in votes) posed the question.

Before the poll I facetiously asked:
Other hypothetical questions (that you don't need to answer) might include:
  • Are you a clinical neuropsychologist? 
  • Do you use computational modeling in your work?1
  • What is your age?
Here, I was thinking:
  • Clinical neuropsychologists would say No
  • Computational researchers would say Yes
  • On average, older people would be more likely to say No than younger people

After the poll I asked, “So what ARE the differences between executive function and cognitive control? Or are the terms arbitrary, and their usage a matter of context / subfield?”

No one wanted to expound on the differences between the terms.2
I answered No, because I think the terms are arbitrary, and their usage a matter of context and subfield. Not that Wikipedia is the ultimate authority, but I was amused to see this:

Executive functions

From Wikipedia, the free encyclopedia
  (Redirected from Cognitive control)
Executive functions (collectively referred to as executive function and cognitive control) are a set of cognitive processes that are necessary for the cognitive control of behavior: selecting and successfully monitoring behaviors that facilitate the attainment of chosen goals. Executive functions include basic cognitive processes such as attentional control, cognitive inhibition, inhibitory control, working memory, and cognitive flexibility

Nature said this:

Cognitive control

Cognitive control is the process by which goals or plans influence behaviour. Also called executive control, this process can inhibit automatic responses and influence working memory. Cognitive control supports flexible, adaptive responses and complex goal-directed thought. Some disorders, such as schizophrenia and ADHD, are associated with impairments of executive function.

They're using the terms interchangeably! The terms cognitive control, executive control, executive function, and executive control functions are not well-differentiated, except in specific contexts. For instance, the Carter Lab definition below sounds specific at first, but then branches out to encompass many “executive functions” not named as such.

Cognitive Control

"Cognitive control" is a construct from contemporary cognitive neuroscience that refers to processes that allow information processing and behavior to vary adaptively from moment to moment depending on current goals, rather than remaining rigid and inflexible. Cognitive control processes include a broad class of mental operations including goal or context representation and maintenance, and strategic processes such as attention allocation and stimulus-response mapping. Cognitive control is associated with a wide range of processes and is not restricted to a particular cognitive domain. For example, the presence of impairments in cognitive control functions may be associated with specific deficits in attention, memory, language comprehension and emotional processing. ...

Actually, the term Cognitive Control dates back to the 1920s, if not further. Two quick examples.

(1) When discussing Charles Spearman, his theory of intelligence, and his three qualitative principles, Charles S. Slocombe (1928) wrote:
“To these he adds five quantitative principles, cognitive control (attention), fatigue, retentivity, constancy of output, and primordial potency...”
Simple! Cognitive Control = Attention.

(2) Frederick Anderson (1942), in The Relational Theory of Mind:
“Meanings, then, are mental processes which, although not themselves objects for consciousness, actively modify and characterize that of which we are for the moment conscious. They differ from other subconscious processes in this respect, that we have cognitive control over them and can at any moment bring them to light if we choose.”
Cognitive Control = having the capacity of “bringing things into consciousness” — is this different from attention, or “paying attention” to something by making it the focus of awareness?


Moving into the 21st century, two of the quintessential contemporary cognitive control papers that [mostly] banish executives from their midst are:

Miller and Cohen (2001):
“The prefrontal cortex has long been suspected to play an important role in cognitive control, in the ability to orchestrate thought and action in accordance with internal goals.”

Botvinick et al. (2001):
“A remarkable feature of the human cognitive system is its ability to configure itself for the performance of specific tasks through appropriate adjustments in perceptual selection, response biasing, and the on-line maintenance of contextual information. The processes behind such adaptability, referred to collectively as cognitive control, have been the focus of a growing research program within cognitive psychology.”

I originally approached this topic during research for a future post on Mindstrong and their “digital phenotyping” technology. Two of their five biomarkers are Executive Function and Cognitive Control. How do they differ? There's an awful lot of overlap, as we'll see in a future post.


Footnotes

1 Another fun (and related) determinant might be, “does your work focus on the dorsal anterior cingulate cortex?” In that case, the respondent would answer Yes.

2 except for one deliberately obfuscatory response.


References

Anderson F. (1942). The Relational Theory of Mind. The Journal of Philosophy 39(10):253-60.

Botvinick MM, Braver TS, Barch DM, Carter CS, Cohen JD. (2001). Conflict monitoring and cognitive control. Psychol Rev. 108(3):624-52.

Miller EK, Cohen JD. (2001). An integrative theory of prefrontal cortex function. Annu Rev Neurosci. 24:167-202.

Slocombe CS. (1928). Of mental testing—a pragmatic theory. Journal of Educational Psychology 19(1):1-24.


Appendix

Many, many articles use the terms interchangeably. I won't single out anyone in particular. Instead, here is a valiant attempt by Nigg (2017) to make a slight differentiation between them in a review paper entitled:
On the relations among self-regulation, self-control, executive functioning, effortful control, cognitive control, impulsivity, risk-taking, and inhibition for developmental psychopathology.
But in the end he concludes, “Executive functioning, effortful control, and cognitive control are closely related.”
