Big Theory vs. Big Data Debate at CNS2018
UPDATE April 9, 2018: Video of the entire debate is now available at the CNS blog, on YouTube, and at the end of this post.
What Will Solve the Big Problems in Cognitive Neuroscience?
That was the question posed in the Special Symposium moderated by David Poeppel at the Boston Sheraton (co-sponsored by the Cognitive Neuroscience Society and the Max-Planck-Society). The format was four talks by prominent experts in (1) the complexity of neural circuits and neuromodulation in invertebrates; (2) computational linguistics and machine learning; (3) human neuroimaging/the next wave in cognitive and computational neuroscience; and (4) language learning/AI contrarianism. These were followed by a lively panel discussion and a Q&A session with the audience. What a great format!
We already knew the general answer before anyone started speaking.
— CNS News (@CogNeuroNews) March 24, 2018
But I believe that Dr. Eve Marder, the first speaker, posed the greatest challenges to the field of cognitive neuroscience, objections that went mostly unaddressed by the other speakers. Her talk was a treasure trove of quotable witticisms (paraphrased):
- How much ambiguity can you live with in your attempt to understand the brain? For me, I get uncomfortable with anything more than 100 neurons
- If you're looking for optimization (in [biological] neural networks), YOU ARE DELUSIONAL!
- Degenerate mechanisms produce the same changes in behavior, even in a 5-neuron network...
- ...so Cognitive Neuroscientists should be VERY WORRIED
Nightmares for Cognitive Neuroscientists: Modulation of a Single Neuron Has State-Dependent Actions on Circuit Dynamics (Gutierrez & Marder) https://t.co/trAHVbIFd4 #CNS2018 pic.twitter.com/nsYHiS5dxv
— The Neurocritic (@neurocritic) March 25, 2018
Dr. Marder started her talk by expressing puzzlement about why she would be asked to speak on such a panel, but she gamely agreed. She initially expressed some ideas that almost everyone endorses:
- Good connectivity data is essential
- Simultaneous recordings from many neurons are a good idea [but how many is enough?]
- It's not clear what changes when circuits get big
- An apparent “return to baseline” is always hiding a change that can be cryptic
- On the optimization issue... nervous systems can't optimize for one situation if it makes them unable to deal with other [unexpected] situations.
- How does degeneracy relieve the tyranny?
Dr. Marder was also a speaker at the Canonical Computation in Brains and Machines meeting in mid-March (h/t @neuroecology), and her talk from that conference is available online.
I believe the talks from the present symposium will be on the CNS YouTube channel as well, and I'll update the post if/when that happens.
Speaking of canonical computation, now I know why Gary Marcus was apoplectic at the thought of “one canonical cortical circuit to rule them all.” More on that in a moment...
The next speaker was Dr. Alona Fyshe, who spoke about computational vision: MLE, MAP, ImageNet, CNNs. I'm afraid I can't enlighten you here. Like everyone else, she thought theory vs. data is a false dichotomy. Her memorable tagline was “Kill Your Darlings.” At first I thought this meant delete your best line [of code? of your paper?], but in reality it means “our theories need to be flexible enough to adapt to data” (always follow @vukovicnikola #cns2018 for the best real-time conference coverage).
Next up was Dr. Gary Marcus, who started out endorsing the famous Jonas and Kording (2017) paper — Could a Neuroscientist Understand a Microprocessor? — which suggested that current data analysis methods in neuroscience are inadequate for producing a true understanding of the brain. Later, during the discussion, Dr. Jack Gallant quipped that the title of that paper should have been “Neuroscience is Hard” (on Twitter, @KordingLab thought this was unfair). For that matter, Gallant told Marcus, “I think you just don't like the brain.” [Gallant is big on data, but not mindlessly so]
image via @vukovicnikola
This sparked a lively debate during the panel discussion and the Q&A.
Anyway, back to Marcus. “Parsimony is a false god,” he said. I've long agreed with this sentiment, especially when it comes to the brain — the simplest explanation isn't always true. Marcus is pessimistic that deep learning will lead to great advances in explaining neural systems (or AI). It's that pesky canonical computation again. The cerebral cortex (and the computations it performs) isn't uniform across regions (Marcus et al., 2014).
This is not a new idea. In my ancient dissertation, I cited Swindale (1990) and said:
Swindale (1990) argues that the idea of mini-columns and macro-columns was drawn on insufficient data. Instead, the diversity of cell types in different cortical areas may result in more varied and complex organization schemes which would adequately reflect the different types of information stored there [updated version would be “types of computations performed there”].1
Finally, Dr. Jack Gallant came out of the gate saying the entire debate is silly, and that we need both theory and data. But he also thinks it's silly to believe we'll get there with theory alone. We need to build better measurement tools, stop faulty analysis practices, and develop improved experimental paradigms. He clearly favors collecting more data, but in a refined way: for the moment, collect large, rich, naturalistic data sets using existing technology.
And remember, kids, “the brain is a horror show of maps.”
image via @vukovicnikola
Big Data AND Big Theory: Everyone Agrees (sorta)
Eve Marder – The Importance of the Small for Understanding the Big
Alona Fyshe – Data Driven Everything
Gary Marcus – Neuroscience, Deep Learning, and the Urgent Need for an Enriched Set of Computational Primitives
Jack Gallant – Which Presents the Biggest Obstacle to Advances in Cognitive Neuroscience Today: Lack of Theory or Lack of Data?
Gary Marcus talking over Jack Gallant. Eve Marder is out of the frame.
image by @CogNeuroNews
Footnote
1 Another quote from the young Neurocritic:
As finer analyses are applied to both local circuitry and network properties, our theoretical understanding of neocortical operation may require further revision, if not total replacement with other metaphors. At our current state of knowledge, a number of different conceptual frameworks can be overlaid on the existing data to derive an order that may not be there. Or conversely, the data can be made to fit into one's larger theoretical view.
Great article :)
I was at a talk regarding AI a few days ago where the presenter, as per usual, described our progress with deep learning as taking us very close to creating machines that have human-like cognition, and he even stated that consciousness is obviously within our reach. This underlying assumption that our current computational model is the model the brain is running under is so easily accepted. Or maybe funding is easier to come by when such promises are being made :P
Great and very funny article!
Thanks!