Are emergent properties really for losers? Why are architectures important? What are “mirror neuron ensembles” anyway? My last post presented an idiosyncratic distillation of the Big Ideas in Cognitive Neuroscience symposium, presented by six speakers at the 2017 CNS meeting. Here I’ll briefly explain what I meant in the bullet points. In some cases I didn't quite understand what the speaker meant so I used outside sources. At the end is a bonus reading list.
The first two speakers made an especially fun pair on the topic of memory: they held opposing views on the “engram”, the physical manifestation of a memory in the brain.1 They also disagreed on most everything else.
1. Charles Randy Gallistel (Rutgers University) – What Memory Must Look Like
Gallistel is convinced that Most Neuroscientists Are Wrong About the Brain. This subtly bizarre essay in Nautilus (which was widely scorned on Twitter) succinctly summarized the major points of his talk. You and I may think the brain-as-computer metaphor has outlived its usefulness, but Gallistel says that “Computation in the brain must resemble computation in a computer.”
- The brain is an information processing device in the sense of Shannon information theory.
- Memories (“engrams”) are not stored at synapses.
- The engram is inter-spike interval.
- Emergent properties are for losers.
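To make the Shannon framing concrete, here's a minimal toy sketch (my own illustration, not anything Gallistel presented) of treating a spike train's inter-spike intervals as symbols and measuring their Shannon entropy, i.e., how many bits such a code could carry:

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (in bits) of a discrete symbol sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical spike times in ms; on Gallistel's view the "message"
# lives in the intervals between spikes, not in synaptic weights.
spike_times = [0, 10, 25, 30, 50, 55, 70]
intervals = [b - a for a, b in zip(spike_times, spike_times[1:])]

# Discretize the intervals into 5 ms bins to get a symbol alphabet,
# then ask how much information this little code carries.
symbols = [iv // 5 for iv in intervals]
bits = shannon_entropy(symbols)  # ≈ 1.92 bits for this toy train
```

The bin width and spike times are arbitrary choices for illustration; the point is only that an interval code is quantifiable in Shannon's terms at all.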
2. Tomás Ryan (@TJRyan_77) – Information Storage in Memory Engrams
Ryan began by acknowledging his tremendous respect for Gallistel's talk, which was powerful, illuminating, very categorical, polarizing, and rigid. But wrong. Oh so very wrong. Memory is not essentially molecular, we should not approach memory and the brain from a design perspective, and information storage need not mimic a computer.
- The brain does not use Shannon information.
- Memories (“engrams”) are not stored at synapses.
- We learn entirely through spike trains.
- The engram is an emergent property.
3. Angela Friederici (Max Planck Institute for Human Cognitive and Brain Sciences) – Structure and Dynamics of the Language Network
Following on the heels of the rodent engram crowd, Friederici pointed out the obvious limitations of studying language, a uniquely human trait.
- Language is genetically predetermined.
- The “merge” computation is localized in BA 44.
The problem is that acute stroke patients with dysfunctional tissue in left BA 44 do not have impaired syntax. Instead, they have difficulty with phonological short-term memory (keeping strings of digits in mind, like remembering a phone number).
- There is something called mirror neuron ensembles.
“This is a poor hypothesis,” she said.
4. Jean-Rémi King (@jrking0) – Parsing Human Minds
King's expertise is in visual processing (not language), but his talk drew parallels between vision and speech comprehension. A key goal in both domains is to identify the algorithm (sequence of operations) that translates input into meaning.
- Recursion is big.
- Architectures are important.
Each architecture could be compatible with a pattern of brain activity at different time points. But do the classifiers at different time points generalize to other time points? This can be determined by a temporal generalization analysis, which “reveals a repertoire of canonical brain dynamics.”
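For intuition, here's a toy sketch of that analysis (simulated data and a nearest-class-mean decoder of my own choosing; King and Dehaene use real MEG/EEG data and proper classifiers): train a decoder at one time point, test it at every other time point, and the resulting matrix shows when, and for how long, a neural code is present.

```python
import random
random.seed(0)

# Simulated trials: two conditions differ only at "early" timepoints (t < 5),
# so a decoder trained early should fail to generalize to late timepoints.
def make_trial(label, T=10):
    return [(label if t < 5 else 0) + random.gauss(0, 0.3) for t in range(T)]

train_set = [(make_trial(l), l) for l in (0, 1) for _ in range(20)]
test_set = [(make_trial(l), l) for l in (0, 1) for _ in range(20)]

def class_means(data, t):
    """Mean signal per condition at timepoint t."""
    m = {0: [], 1: []}
    for trial, label in data:
        m[label].append(trial[t])
    return {k: sum(v) / len(v) for k, v in m.items()}

def accuracy(train_t, test_t):
    """Train a nearest-class-mean decoder at train_t, test it at test_t."""
    mu = class_means(train_set, train_t)
    correct = 0
    for trial, label in test_set:
        pred = min(mu, key=lambda k: abs(trial[test_t] - mu[k]))
        correct += (pred == label)
    return correct / len(test_set)

# Temporal generalization matrix: rows = training time, cols = testing time.
tgm = [[accuracy(tr, te) for te in range(10)] for tr in range(10)]
```

In this toy case decoding is high only when both training and testing fall in the early window, giving the square on-diagonal block that the method's "repertoire of canonical brain dynamics" is read off from.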
5. Danielle Bassett (@DaniSBassett) – A Network Neuroscience of Human Learning: Potential to Inform Quantitative Theories of Brain and Behavior
Bassett previewed an arc of exciting ideas where we've shown progress, followed by frustrations and failures, which may ultimately provide an opening for the really Big Ideas. Her focus is on learning from a network perspective, which means patterns of connectivity in the whole brain. What is the underlying network architecture that facilitates these spatially distributed effects?
What is the relationship between these two notions of modularity?
[I ask this as an honest question.]
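One of those notions, graph-theoretic modularity, does have a standard quantitative form: Newman's Q, the fraction of edges falling within communities minus the fraction expected by chance. Here's a minimal sketch on a toy network of my own (not an example from the talk):

```python
def modularity(adj, communities):
    """Newman's modularity Q for an undirected, unweighted graph.
    adj: symmetric adjacency matrix (list of lists of 0/1).
    communities: community label for each node."""
    n = len(adj)
    two_m = sum(sum(row) for row in adj)  # = 2 * number of edges
    k = [sum(row) for row in adj]         # node degrees
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                # Observed edge minus chance expectation k_i * k_j / 2m.
                q += adj[i][j] - k[i] * k[j] / two_m
    return q / two_m

# Two triangles joined by a single edge: a clearly modular toy network.
adj = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
q = modularity(adj, [0, 0, 0, 1, 1, 1])  # ≈ 0.357 for this partition
```

Whether this graph-level quantity maps onto the cognitive notion of a module is exactly the open question.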
Major challenges remain, of course.
- Build a bridge from networks to models of behavior.
Incorporate well-specified behavioral models such as reinforcement learning and the drift diffusion model of decision making. These models are fit to the data to derive parameters such as the learning rate (alpha) in reinforcement learning. Models of behavior can help generate hypotheses about how the system actually works.
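As a concrete illustration (my own minimal sketch, not from the talk), here is the delta-rule value update behind that learning-rate parameter: on every trial the value estimate moves toward the received reward by a fraction alpha, and alpha is what gets fit to each subject's behavior.

```python
def rescorla_wagner(rewards, alpha=0.1, v0=0.0):
    """Delta-rule value updates: v <- v + alpha * (reward - v).
    alpha is the learning-rate parameter fit to behavior."""
    v = v0
    values = [v]
    for r in rewards:
        v = v + alpha * (r - v)
        values.append(v)
    return values

# With reward fixed at 1, the value estimate climbs toward 1
# at a rate set entirely by alpha.
vals = rescorla_wagner([1] * 10, alpha=0.3)
```

A fast learner (large alpha) converges in a few trials; a slow learner (small alpha) takes many. Fitting alpha per subject turns that difference into a parameter a network analysis can try to explain.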
- Use generative models to construct theories.
Network models are extremely useful, but they're not theories. They're descriptors. They don't generate new frameworks for understanding what the data should look like. Theory-building is obviously critical for moving forward.
6. John Krakauer (@blamlab) – Big Ideas in Cognitive Neuroscience: Action
Krakauer mentioned the Big Questions in Neuroscience symposium at the 2016 SFN meeting, which motivated the CNS symposium as well as a splashy critical paper in Neuron. He raised an interesting point about how the term “connectivity” has different meanings, i.e. the type of embedded connectivity that stores information (engrams) vs. the type of correlational connectivity when modules combine with each other to produce behavior. [BTW, is everyone here using “modules” in the same way?]
- Machine learning will save us.
- Go back to behavioral neuroscience.
OVERALL, there was an emphasis on computational approaches with nods to the three levels of David Marr:
computation – algorithm – implementation
We know from Krakauer et al. 2017 (and from CNS meetings past and present) that co-organizer David Poeppel is a big fan of Marr. The end goal of a Marr-ian research program is to find explanations, to reach an understanding of brain-behavior relations. This requires a detailed specification of the computational problem (i.e., behavior) to uncover the algorithms. The correlational approach of cognitive neuroscience — and even the causal-mechanistic circuit manipulations of optogenetic neuroscience — just don't cut it anymore.
Footnotes
1 Although neither speaker explicitly defined the term, it is most definitely not the engram as envisioned by Scientology: “a detailed mental image or memory of a traumatic event from the past that occurred when an individual was partially or fully unconscious.” The term was coined by Richard Semon in 1904.
2 This paper (Johansson et al., 2014) appeared in PNAS, and Gallistel was the prearranged editor.
3 For instance, here's Mu-ming Poo: “There is now general consensus that persistent modification of the synaptic strength via LTP and LTD of pre-existing connections represents a primary mechanism for the formation of memory engrams.”
4 If you don't understand all this, you're not alone. From Machine Learning: the Basics.
This idea of minimizing some function (in this case, the sum of squared residuals) is a building block of supervised learning algorithms, and in the field of machine learning this function - whatever it may be for the algorithm in question - is referred to as the cost function.
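To unpack that quote, here's a self-contained sketch (a toy example of my own, not taken from the book) of minimizing a sum-of-squared-residuals cost function by gradient descent to fit a line:

```python
def sum_squared_residuals(w, b, xs, ys):
    """The cost function: sum of squared residuals for the line y = w*x + b."""
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys))

def fit_line(xs, ys, lr=0.01, steps=5000):
    """Minimize the cost with plain gradient descent on w and b."""
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of the cost with respect to w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys))
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys))
        w -= lr * dw / n
        b -= lr * db / n
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]  # noiseless data from y = 2x + 1
w, b = fit_line(xs, ys)  # recovers w ≈ 2, b ≈ 1
```

Swap in a different cost function and the same descent machinery gives you a different learning algorithm, which is the sense in which the cost function is the "building block."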
Reading List
Everyone is Wrong
Here's Why Most Neuroscientists Are Wrong About the Brain. Gallistel in Nautilus, Oct. 2015.
Time to rethink the neural mechanisms of learning and memory. Gallistel CR, Balsam PD. Neurobiol Learn Mem. 2014 Feb;108:136-44.
Engrams are Cool
What is memory? The present state of the engram. Poo MM, Pignatelli M, Ryan TJ, Tonegawa S, Bonhoeffer T, Martin KC, Rudenko A, Tsai LH, Tsien RW, Fishell G, Mullins C, Gonçalves JT, Shtrahman M, Johnston ST, Gage FH, Dan Y, Long J, Buzsáki G, Stevens C. BMC Biol. 2016 May 19;14:40.
Engram cells retain memory under retrograde amnesia. Ryan TJ, Roy DS, Pignatelli M, Arons A, Tonegawa S. Science. 2015 May 29;348(6238):1007-13.
Engrams are Overrated
For good measure, some contrarian thoughts floating around Twitter...
“Can We Localize Merge in the Brain? Yes We Can”
Merge in the Human Brain: A Sub-Region Based Functional Investigation in the Left Pars Opercularis. Zaccarella E, Friederici AD. Front Psychol. 2015 Nov 27;6:1818.
The neurobiological nature of syntactic hierarchies. Zaccarella E, Friederici AD. Neurosci Biobehav Rev. 2016 Jul 29. doi: 10.1016/j.neubiorev.2016.07.038.
Really?
Asyntactic comprehension, working memory, and acute ischemia in Broca's area versus angular gyrus. Newhart M, Trupe LA, Gomez Y, Cloutman L, Molitoris JJ, Davis C, Leigh R, Gottesman RF, Race D, Hillis AE. Cortex. 2012 Nov-Dec;48(10):1288-97.
Patients with acute strokes in left BA 44 (part of Broca's area) do not have impaired syntax.
Dynamics of Mental Representations
Characterizing the dynamics of mental representations: the temporal generalization method. King JR, Dehaene S. Trends Cogn Sci. 2014 Apr;18(4):203-10.
King JR, Pescetelli N, Dehaene S. Brain Mechanisms Underlying the Brief Maintenance of Seen and Unseen Sensory Information. Neuron. 2016; 92(5):1122-1134.
A Spate of New Network Articles by Bassett
A Network Neuroscience of Human Learning: Potential to Inform Quantitative Theories of Brain and Behavior. Bassett DS, Mattar MG. Trends Cogn Sci. 2017 Apr;21(4):250-264.
This one is most relevant, as it shares its title with Dr. Bassett's talk.
Network neuroscience. Bassett DS, Sporns O. Nat Neurosci. 2017 Feb 23;20(3):353-364.
Emerging Frontiers of Neuroengineering: A Network Science of Brain Connectivity. Bassett DS, Khambhati AN, Grafton ST. Annu Rev Biomed Eng. 2017 Mar 27. doi: 10.1146/annurev-bioeng-071516-044511.
Modelling And Interpreting Network Dynamics [bioRxiv preprint]. Ankit N Khambhati, Ann E Sizemore, Richard F Betzel, Danielle S Bassett. doi: https://doi.org/10.1101/124016
Behavior is Underrated
Neuroscience Needs Behavior: Correcting a Reductionist Bias. Krakauer JW, Ghazanfar AA, Gomez-Marin A, MacIver MA, Poeppel D. Neuron. 2017 Feb 8;93(3):480-490.
The first author was a presenter and the last author an organizer of the symposium.
Thanks to @jakublimanowski for the tip on Goldstein (1999).
One major problem with cost functions is the fact that given a behavior or neural pattern, you can always cook up a cost function for which the observed data (or your preferred normative story) constitutes an optimal solution. So unless the cost function idea is further constrained by data, the idea of optimizing cost functions in neuroscience amounts to 'animals try to improve things'. Which no one would actually argue against.
Our claim that learning in cerebellar Purkinje cells involves something quite different from LTD/LTP (Johansson et al., PNAS 2014) is hardly overturned by an appeal to majority opinion or "consensus". The authors of the Poo et al. paper (BMC Biology, 2016) did not comment on our paper and had probably not read it when they wrote their review. In fact, they do not discuss the cerebellum at all. Our claim has also received further support in a more recent paper (Johansson et al., Cell Reports, 2015). It is also consistent with motor learning in mice which have had their LTD mechanism knocked out (Schonewille et al., Neuron 2011).
Relished your reading list headers, especially "Everyone is wrong". The human condition. For a somewhat different take on wrongness, and on the relationship between language and vision, you might glance at this online blog/book:
rewiring-neuroscience.com
Regards, John Harris