Friday, January 29, 2021

Thoughts of Blue Brains and GABA Interneurons


An unsuccessful plan to create a computer simulation of a human brain within 10 years. An exhaustive catalog of cell types comprising a specific class of inhibitory neurons within mouse visual cortex. What do these massive research programs have in common? Both efforts were conducted by large multidisciplinary teams at non-traditional research institutions: the Blue Brain Project based in Lausanne, Switzerland and the Allen Institute for Brain Science in Seattle, Washington.

BIG SCIENCE is the wave of the future, and the future is now. Actually, that future started 15-20 years ago. The question should be, is there a future for any other kind of neuroscience?
 

Despite a superficial “BIG SCIENCE” similarity, the differences between funding sources, business models, leadership, operation, and goals of Blue Brain and the Allen Institute are substantial. Henry Markram, the “charismatic but divisive” visionary behind Blue Brain (and the €1 billion Human Brain Project) has been criticized for his “autocratic” leadership, “crap” ideas, and “ill-conceived, ... idiosyncratic approach to brain simulation” in countless articles. His ambition is undeniable, however:

“I realized I could be doing this [e.g., standard research on spike-timing-dependent plasticity] for the next 25, 30 years of my career, and it was still not going to help me understand how the brain works.”

 

I'm certainly not a brilliant neuroscientist in Markram's league, but I commented previously on how a quest to discover “how the brain works” might be futile:

...the search for the Holy Grail of [spike trains, network generative models, manipulated neural circuit function, My Own Private Connectome, predictive coding, the free energy principle (PDF), or a computer simulation of the human brain promised by the Blue Brain Project] that will “explain” how “The Brain” works is a quixotic quest. It's a misguided effort when the goal is framed so simplistically (or monolithically).


In his infamous 2009 TED talk, Markram stated that a computer simulation of the human brain was possible in 10 years:
“I hope that you are at least partly convinced that it is not impossible to build a brain. We can do it within 10 years, and if we do succeed, we will send to TED, in 10 years, a hologram to talk to you.”


This claim would come back to haunt him in 2019, because (of course) he was nowhere close to simulating a human brain. In his defense, Markram said that his critics misunderstood and misinterpreted his grandiose proclamations.1

Blue Brain is now aimed at “biologically detailed digital reconstructions and simulations of the mouse brain.”

In Silico

Documentary filmmaker Noah Hutton2 undertook his own 10 year project that followed Markram and colleagues as they worked towards the goals of Blue Brain. He was motivated by that TED talk and its enthralling prediction of a brain in a supercomputer (hence in silico). Originally entitled Bluebrain and focused on Markram, the documentary evolved over time to include more realistic viewpoints and interviews with skeptical scientists, including Anne Churchland, Terry Sejnowski, Stanislas Dehaene, and Cori Bargmann. Ironically, Sebastian Seung was one of the loudest critics (ironic because Seung has a grandiose TED talk of his own, I Am My Connectome).


 

In Silico was available for streaming during the DOC NYC Festival in November (in the US only), and I had the opportunity to watch it. I was impressed by the motivation and dedication required to complete such a lengthy project.  Hutton had gathered so much footage that he could have made multiple movies from different perspectives.

Over the course of the film, Blue Brain/Human Brain blew up, with ample critiques and a signed petition from hundreds of neuroscientists (see archived Open Letter).

And Hutton grew up. He reflects on the process (and how he changed) at the end of the film. He was only 22 at the start, and 10 years is a long time at any age.

Some of the Big Questions in In Silico:

  • How do you make sure all this lovely simulated activity would be relevant for an animal's behavior?
  • How do you build in biological imperfections (noise) or introduce chaos into your perfect pristine computational model? “Tiny mistakes” are critical for adaptable biological systems.

  • “You cannot play the same soccer game again,” said one of the critics (Terry Sejnowski, I think).

  • “What is a generic brain?”
  • What is the vision? 

The timeline kept drifting further and further into the future. It was 10 years in 2009, 10 years in 2012, 10 years in 2013, etc. 

Geneva 2019, and it's Year 10: only two Principals left, 150 papers published, and a model of 10 million neurons in mouse cortex. Stunning visuals, but still disconnected from behavior.

In the end, “What have we learned about the brain? Not much. The model is incomprehensible,” to paraphrase Sejnowski.


GABA Interneurons

Another brilliant and charismatic neuroscientist, Christof Koch, was interviewed by Hutton. “Henry has two personalities. One is a fantastic, sober scientist … the other is a PR-minded messiah.”

Koch is Chief Scientist of the MindScope Program at the Allen Institute for Brain Science, which focuses on how neural circuits produce vision. Another major unit is the Cell Types Program, which (as advertised) focuses on brain cell types and connectivity.3

The Allen Institute core principles are team science, Big Science, and open science. An impressive recent paper by Gouwens and 97 colleagues (2020) is a prime example of all three. Meticulous analyses of structural, physiological, and genetic properties identified 28 “met-types” of GABAergic interneurons that have congruent morphological, electrophysiological, and transcriptomic properties. This was winnowed down from more than 500 morphologies in 4,200 GABA-containing interneurons in mouse visual cortex. With this mind-boggling level of neuronal complexity in one specific class of cells in mouse cortex — along with the impossibility of “mind uploading” — my inclination is to say that we will never (never say never) be able to build a realistic computer simulation of the human brain.
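For a concrete (if cartoonish) sense of what an integrated classification involves computationally, here is a minimal Python sketch: z-score the features from each data modality, concatenate them, and fit a mixture model. This is emphatically not the authors' pipeline; the feature matrices are random stand-ins, and the only numbers borrowed from the paper are the roughly 4,200 cells and 28 clusters.

```python
# Minimal sketch (NOT the Gouwens et al. pipeline) of clustering neurons into
# putative "met-types" from combined morphology (m), electrophysiology (e), and
# transcriptomics (t). All features here are random placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_cells = 4200                         # roughly the number of GABAergic cells profiled
X_m = rng.normal(size=(n_cells, 20))   # stand-in morphological features
X_e = rng.normal(size=(n_cells, 40))   # stand-in electrophysiological features
X_t = rng.normal(size=(n_cells, 500))  # stand-in gene-expression features

# z-score each modality so no single data type dominates, then concatenate
X = np.hstack([StandardScaler().fit_transform(m) for m in (X_m, X_e, X_t)])

# reduce dimensionality and fit a 28-component mixture model,
# echoing the 28 met-types reported in the paper
Z = PCA(n_components=50, random_state=0).fit_transform(X)
labels = GaussianMixture(n_components=28, random_state=0).fit_predict(Z)
print(np.bincount(labels))             # cells per putative "met-type"
```

The real analysis is far more involved (and far better validated); the point of the toy version is only to show why team science is needed to assemble three such different data types across thousands of individual cells.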


Footnotes 

1 Here's another gem: “There literally are only a handful of equations that you need to simulate the activity of the neocortex.”

2 Most of Hutton's work has been as writer and director of documentary films, but I was excited to see that his first narrative feature, Lapsis, will be available for streaming next month. To accompany his film, he's created an immersive online world of interlinked websites that advertise non-existent employment opportunities, entertainment ventures, diseases, and treatments. It very much reminds me of the realistic yet spoof websites associated with the films Eternal Sunshine of the Spotless Mind (LACUNA, Inc.) and Ex Machina (BlueBook). In fact, I'm so enamored with them that they've appeared in several of my own blog posts.

3 Investigation of cell types is big in the NIH BRAIN Initiative ® as well.



References

Abbott A. (2020). Documentary follows implosion of billion-euro brain project. Nature 588:215-6.

[Alison Abbott covered the Blue Brain/Human Brain sturm und drang for years]

Gouwens NW, Sorensen SA, Baftizadeh F, Budzillo A, Lee BR, Jarsky T, Alfiler L, Baker K, Barkan E, Berry K, Bertagnolli D ... Zeng H et al. (2020). Integrated morphoelectric and transcriptomic classification of cortical GABAergic cells. Cell 183(4):935-953.

Waldrop M. (2012). Computer modelling: Brain in a box. Nature News 482(7386):456.


Further Reading

The Blue Brain Project (01 February 2006), by Dr. Henry Markram

“Alan Turing (1912–1954) started off by wanting to 'build the brain' and ended up with a computer. ... As calculation speeds approach and go beyond the petaFLOPS range, it is becoming feasible to make the next series of quantum leaps to simulating networks of neurons, brain regions and, eventually, the whole brain.”

A brain in a supercomputer (July 2009), Henry Markram's TED talk
“Our mission is to build a detailed, realistic computer model of the human brain. And we've done, in the past four years, a proof of concept on a small part of the rodent brain, and with this proof of concept we are now scaling the project up to reach the human brain.”


Blue Brain Founder Responds to Critics, Clarifies His Goals (11 Feb 2011), Science news

Bluebrain: Noah Hutton's 10-Year Documentary about the Mission to Reverse Engineer the Human Brain (9 Nov 2012), an indispensable interview with Ferris Jabr in Scientific American

European neuroscientists revolt against the E.U.'s Human Brain Project (11 July 2014), Science news

Row hits flagship brain plan (7 July 2014), Nature news

Brain Fog (7 July 2014), Nature editorial

Human Brain Project votes for leadership change (4 March 2015), Nature news

'In Silico:' Director Noah Hutton reveals how one neuroscientist's pursuit of perfection went awry (10 Nov 2020), another indispensable interview, this time with Nadja Sayej in Inverse

“They still haven’t even simulated a whole mouse brain. I realized halfway through the 10-year point that the human brain probably wasn’t going to happen.” ...

“In the first few years, I followed only the team. Then, I started talking to critics.”

 

 


Wednesday, December 30, 2020

How the Brain Works


Every now and then, it's refreshing to remember how little we know about “how the brain works.” I put that phrase in quotes because the search for the Holy Grail of [spike trains, network generative models, manipulated neural circuit function, My Own Private Connectome, predictive coding, the free energy principle (PDF), or a computer simulation of the human brain promised by the Blue Brain Project] that will “explain” how “The Brain” works is a quixotic quest. It's a misguided effort when the goal is framed so simplistically (or monolithically).

First of all, whose brain are we trying to explain? Yours? Mine? The brain of a monkey, mouse, marsupial, monotreme, mosquito, or mollusk? Or C. elegans with its 302 neurons? “Yeah yeah, we get the point,” you say, “stop being so sarcastic and cynical. We're searching for core principles, first principles.”



In response to that tweet, definitions of “core principle” included:

  • Basically: a formal account of why brains encode information and control behaviour in the way that they do.
  • Fundamental theories on the underlying mechanisms of behavior. 
    • [Maybe “first principles” would be better?]
  • Set of rules by which neurons work?

 

Let's return to the problem of explanation. What are we trying to explain? Behavior, of course [a very specific behavior most of the time]: X behavior in your model organism. But we also want to explain thought, memory, perception, emotion, neurological disorders, mental illnesses, etc. Seems daunting now, eh? Can the same core principles account for all these phenomena across species? I'll step out on a limb here and say NO, and then snort at myself for asking such an unfair question. Best to break your research program down into tiny reductionistic chunks. More manageable that way.

But what counts as an “explanation”? We haven't answered that yet. It depends on your goal and your preferred level of analysis (à la David Marr's three levels):

computation – algorithm – implementation
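A toy illustration of the distinction (mine, not Marr's famous cash-register example): the same computational-level problem, adding two numbers, admits different algorithmic-level solutions, while the implementation level (neurons, silicon, an abacus) is a separate question again.

```python
# computation    - WHAT is being solved: map two quantities to their sum
# algorithm      - HOW it is solved: carry addition vs. counting up
# implementation - WHAT SUBSTRATE runs it: here, CPython on a CPU
def add_by_carry(a: int, b: int) -> int:
    """Grade-school algorithm: add digit by digit, propagating carries."""
    result, carry, place = 0, 0, 1
    while a or b or carry:
        s = (a % 10) + (b % 10) + carry
        result += (s % 10) * place
        carry, place = s // 10, place * 10
        a, b = a // 10, b // 10
    return result

def add_by_counting(a: int, b: int) -> int:
    """A different algorithm for the same computation: increment a, b times."""
    for _ in range(b):
        a += 1
    return a

# Same computational-level answer, different algorithmic-level solutions.
assert add_by_carry(123, 456) == add_by_counting(123, 456) == 579
```

An "explanation" pitched at one level (the carry rule) need not say anything about the others, which is part of why arguments about explaining "The Brain" so often talk past each other.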

 

 

Again, what counts as “explanation”? A concise answer was given by Lila Davachi during a talk in 2019, when we all still met in person for conferences:

“Explanations describe (causal) relationships between phenomena at different levels.”


from Dr. Lila Davachi (CNS meeting, 2019)
The Relation Between Psychology and Neuroscience
(see video, also embedded below)



Did I say this was a “refreshing” exercise? I meant depressing... but I'm usually a pessimist. (This has grown worse as I've gotten older and been in the field longer.)

 
Are there reasons for optimism?

You can follow the replies here, and additional replies to this question in another thread starting here.

I'd say the Neuromatch movement (instigated by computational neuroscientists Konrad Kording and Dan Goodman) is definitely a reason for optimism!


Further Reading


The Big Ideas in Cognitive Neuroscience, Explained (2017)

... The end goal of a Marr-ian research program is to find explanations, to reach an understanding of brain-behavior relations. This requires a detailed specification of the computational problem (i.e., behavior) to uncover the algorithms. The correlational approach of cognitive neuroscience and even the causal-mechanistic circuit manipulations of optogenetic neuroscience just don't cut it anymore.

An epidemic of "Necessary and Sufficient" neurons (2018)

A miniaturized holy grail of neuroscience is discovering that activation or inhibition of a specific population of neurons (e.g., prefrontal parvalbumin interneurons) or neural circuit (e.g., basolateral amygdala → nucleus accumbens) is “necessary and sufficient” (N&S) to produce a given behavior.

Big Theory, Big Data, and Big Worries in Cognitive Neuroscience (from CNS meeting, 2018)
Dr. Eve Marder ... posed the greatest challenges to the field of cognitive neuroscience, objections that went mostly unaddressed by the other speakers.  [paraphrased below]:
  • How much ambiguity can you live with in your attempt to understand the brain? For me I get uncomfortable with anything more than 100 neurons
  • If you're looking for optimization (in [biological] neural networks), YOU ARE DELUSIONAL!
  • Degenerate mechanisms produce the same changes in behavior, even in a 5 neuron network...
  • ...so Cognitive Neuroscientists should be VERY WORRIED

 

 


The Neuromatch Revolution (2020)

“A conference made for the whole neuroscience community”

 

An Amicable Discussion About Psychology and Neuroscience (from CNS meeting, 2019)

  • the conceptual basis of cognitive neuroscience shouldn't be correlation
  • but what if the psychological and the biological are categorically dissimilar??

...and more!

The video below is set to begin with Dr. Davachi, but the entire symposium is included.



Monday, November 30, 2020

The Neurohumanities: a new interdisciplinary paradigm or just another neuroword?

 


The latest issue of Neuron has published five thematic “NeuroView” papers proposing that neuroscience can augment our understanding of classically brain-free fields like art, literature, and theology. Two of the articles discuss the relatively established pursuits of neuroaesthetics (Iigaya et al., 2020) and neuromorality/moral decision-making (Kelly & O'Connell, 2020). 

Another article outlines the bare bones of an ambitious search for the neural correlates of collective memory, or the “Cultural Engram” (Dudai, 2020):

I consider human cultures as biocultural “supraorganisms” that can store memory as distributed experience-dependent, behaviorally relevant representations over hundreds and thousands of years. Similar to other memory systems, these supraorganisms encode, consolidate, store, modify, and express memory items in the concerted activity of multiple types and tokens of sub-components of the system. ... the memory traces are encoded in large distributed assemblies, composed of individual brains, intragenerational and intergenerational interacting brains, and multiple types of artifacts that interact with brains.


The concept of the “Cultural Engram” is not new, but a research program that incorporates an animal model for cultural memory is indeed novel (regardless of its potential validity):

The search for the cultural engram ... must be paired with productive model systems. The human cultural engram is awaiting its supraorganism equivalents of Aplysia, Drosophila, or fear conditioning for it to give away its inner workings.

In other words, a model of human cultural memory in sea slugs and fruit flies.


Hartley and Poeppel (2020) discuss “A Neurohumanities Approach to Language, Music, and Emotion,” which is intriguing to me, since the domains of language, music, and emotion have a long history within the pantheon of human cognitive neuroscience research. However, they aptly summarize the limitations of these established fields:

...one must bear in mind clear limitations: the insights remain by-and-large correlational, not explanatory. ... we still lack the appropriate “conceptual resolution” to develop in a comprehensive, mechanistic, and explanatory fashion how these domains of rich individual experience are implemented in a brain.

Which leads us to the question that motivates this special collection on the Emerging Partnership for Exploring the Human Experience:

Why the Neurohumanities?

Why, indeed. Why now? This is not a particularly new neuroword. A Google search reveals a number of existing programs and conferences in Neurohumanities. From the late aughts to the mid tens, I questioned the rigor of potentially misguided pursuits such as Neuroetiquette and Neuroculture, Neuro-Gov, Neurobranding, and The Neuroscience of Kitchen Cabinetry.

 

One thing that's exciting and new is...

A 2016 to 2021 Wellcome Trust ISSF Award to Trinity College allows opportunities for Trinity Staff to build a new programme in “Neurohumanities” and Public Engagement and to establish or expand research programmes through new collaborations.

 

In support of this initiative, Carew & Ramaswami (2020) argue that...

...the time is right for a closer partnership between specific domains of neuroscience and their counterparts within the humanities, which we define broadly as all aspects of human society and culture, including language, literature, philosophy, law, politics, religion, art, history, and social psychology. ... In addition to the opportunities such partnerships represent for new creative research, we suggest that neuroscience also has a pressing responsibility to engage with the canvas of human experience and problems of critical importance to today’s society, as well as for communicating with a clear objective voice to diverse audiences across professional, cultural, and national boundaries. 


Of critical importance to US society is the erosion of truth and the promulgation of political misinformation at the highest levels. We can't wait for neuroscientific solutions for this menace to democracy. Or as I said in 2017, Neuroscience Can't Heal a Divided Nation.


Additional Reading

The Humanities Are Ruining Neuroscience

Professor of Literary Neuroimaging

Harry Potter and the Prisoner of Mid-Cingulate Cortex

The use and abuse of the prefix neuro- in the decades of the BRAIN

 

References

Carew TJ, Ramaswami M. (2020). The Neurohumanities: An Emerging Partnership for Exploring the Human Experience. Neuron 108(4):590-3.
 
Dudai Y. (2020). In Search of the Cultural Engram. Neuron 108(4):600-3.
 
Hartley CA, Poeppel D. (2020). Beyond the Stimulus: A Neurohumanities Approach to Language, Music, and Emotion. Neuron 108(4):597-9.
 
Iigaya K, O’Doherty JP, Starr GG. (2020). Progress and promise in neuroaesthetics. Neuron 108(4):594-6.
 
Kelly C, O’Connell R. (2020). Can Neuroscience Change the Way We View Morality? Neuron 108(4):604-7.



Friday, October 30, 2020

COVID-19, Predictive Coding, and Terror Management



Pandemics have a way of bringing death into sharper focus in our everyday lives. As of this writing, 1,188,259 people around the world have died from COVID-19, including 234,218 in the United States. In the dark days of April, the death rate was over 20%. Although this has declined dramatically (to 3%), it’s utterly reckless to minimize the risks of coronavirus and flout every mitigation strategy endorsed by infectious disease specialists.


He's like an evil Oprah. You're getting COVID. And you're getting COVID!

One might think that contracting and recovering from COVID-19 would be a sobering experience for most people, but not for the Übermensch (Nietzschean 'Superman'... but really, 'Last Man' is more appropriate) who had access to the latest experimental treatments.1 Trump's boastful reaction is exactly how the 'Coronavirus Episode' of the (scripted) White House reality show was written: “I feel better than I did 20 years ago!” and “I'm a perfect physical specimen.”

This dismissive display reinforces the partisan divide on perceptions of the pandemic and the federal response to it. A recent study by Pew Research Center found major differences in how Democrats and Republicans view the severity of COVID-19. Results from the survey (conducted Aug. 31-Sept. 7, 2020) were no surprise. 

 

 

And as we know, Democrats and Republicans exist in alternate universes constructed by non-overlapping media sources (CNN vs. Fox, to oversimplify), which in turn correlates with whether they wear masks, practice social distancing, and avoid crowds. A new paper in Science (Finkel et al., 2020) integrated data from multiple disciplines to examine the partisan political environment in the US. They found that Democratic and Republican voters have become:

“...POLITICALLY SECTARIAN -- fervently committed to a political identity characterized by three properties: (1) othering (opposing partisans are alien to us), (2) aversion (they are dislikable & untrustworthy), and (3) moralization (they are iniquitous).”

The authors concluded that the combination of all three core ingredients is especially toxic. Furthermore:



Perfect! Dread and existential threat to a fervent political identity during a pandemic that reminds us of our own mortality. The Science paper has a sidebar about motivated (or biased) cognition and whether Democrats and Republicans are equally susceptible (many studies), or whether Republicans are more susceptible than Democrats (other studies).2 

 


We seek out information that confirms our views and push away evidence that contradicts our pre-existing beliefs about “the other”.


Death Denial to Avert Existential Crisis

We also push away thoughts of our own demise: death is something that happens to other people, not to me. Awareness of death or mortality salience — pondering the inevitability of your own death, a time when you will no longer exist — triggers anxiety, according to Terror Management Theory (TMT). In response to this threat, humans react in ways to boost their self-esteem and reinforce their own values (and punish outsiders). These cognitive processes are conceptualized as nebulous “defenses” [nebulous to me, at least] that are deployed to minimize terror. Notably, however, experimental manipulation of mortality salience did not affect “worldview defense” in the large-scale Many Labs 4 replication project, which throws cold water on this aspect of TMT.


Predictive Coding and Perceived Risk of COVID-19

An alternative view of how we disassociate ourselves from death awareness is provided by predictive coding theory. This influential framework hypothesizes that the brain is constantly generating and updating its models of the world based on top-down “biases” and bottom-up sensory input (Clark, 2013):

Brains ... are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. 

Prediction errors are minimized by perceptual inference (updating predictions to better match the input) or active inference (sampling the input in a biased fashion to better fit the predictions). A recent paper considered this framework with regard to beliefs generated during the pandemic, and how they're related to health precautions adopted by individuals to mitigate spread of the virus (Bottemanne et al., 2020). This paper was conceptual (not computational), and it was written in French (meaning I had to read it using Google Translate). 

In brief, pandemics are massive sources of uncertainty. There was a delay in the perception of risk, followed by unrealistic optimism (“certainly I do not run the risk of becoming infected”) despite the growing accumulation of evidence to the contrary. The reduced perception of risk leads people to flout precautionary mandates, even in France (which is currently showing a greater spike in cases than the US). Subsequently, overwhelming media saturation on the daily death toll and the dangers of COVID-19 updates predictions of risk and triggers mortality salience (Bottemanne et al., 2020). 
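To make this belief-updating story concrete, here is a toy Python sketch of perceptual inference: a prior belief about infection risk gets nudged toward incoming evidence in proportion to the prediction error. It is my own illustration, not a model from Clark (2013) or Bottemanne et al. (2020), and every number in it is made up.

```python
def update_belief(belief: float, evidence: float, weight: float = 0.3) -> float:
    """Perceptual inference: move the prediction toward the input."""
    prediction_error = evidence - belief      # mismatch between prediction and input
    return belief + weight * prediction_error

belief = 0.05                             # "certainly I do not run the risk..."
daily_news = [0.2, 0.4, 0.6, 0.6, 0.7]    # hypothetical stream of alarming evidence
for evidence in daily_news:
    belief = update_belief(belief, evidence)
print(round(belief, 2))                   # 0.49: the belief has drifted toward the evidence

# Active inference would instead bias which evidence gets sampled or how much it
# is trusted: with weight=0.05 the same news leaves the belief near 0.15,
# close to the original optimistic prior.
```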

And in support of TMT, Framing COVID-19 as an Existential Threat Predicts Anxious Arousal and Prejudice towards Chinese People. Every day in the US, the president and his minions call the novel coronavirus “the China virus” and other disparaging terms. Is it any wonder that discrimination and violence against Asian-Americans have increased?


If you're American, PLEASE VOTE if you haven't already.


Further Reading

Covid-19 makes us think about our mortality. Our brains aren’t designed for that.

Existential Neuroscience: a field in search of meaning

Neuroexistentialism: A Brain in Search of Meaning

Existential Dread of Absurd Social Psychology Studies

Terror Management Theory

Footnotes

1 The Last Man is the antithesis of the Superman:

An overman [superman] as described by Zarathustra, the main character in Thus Spoke Zarathustra, is the one who is willing to risk all for the sake of enhancement of humanity. In contrary [is] the 'last man' whose sole desire is his own comfort and is incapable of creating anything beyond oneself in any form.

Trump's declaration: “...All I know is I took something, whatever the hell it was. I felt good very quickly ... I felt like Superman.” Whether his kitchen-sink treatment regimen was a good idea has been firmly challenged.

2 There's a large literature on potential cognitive and neural differences between liberals and conservatives, but I won't cover that here. I wrote about many of these studies in the days of yore.


References

Bottemanne H, Morlaàs O, Schmidt L, Fossati P. (2020). Coronavirus: cerveau prédictif et gestion de la terreur [Coronavirus: Predictive brain and terror management]. Encephale 46(3S):S107-S113.

Clark A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences 36(3):181-204.

Finkel EJ et al. (2020). Political sectarianism in America. Science 370:533-536.


“What's going on with this guy?”

 


 

What is the truth underneath the tweet?

President Trump showed labored breathing during his first appearance on the White House balcony. [AP Photo]


Regarding his joyride in the black SUV while he was still hospitalized at Walter Reed:

He did not look tough; he looked trapped.

He looked desperate. He looked pathetic. He looked weak — not because he was ill or because he was finally wearing a mask but because instead of doing the hard work of accepting his own vulnerabilities in the face of sickness, he’d propped himself up on the strength and professionalism of Secret Service agents. Instead of focusing on the humbling task of getting better, he was consumed by the desire to simply look good.

 the end.


Wednesday, September 30, 2020

Neuralink in a Dozen Pigs


In a far-ranging chat with Kara Swisher, Elon Musk talked about sustainable energy, brain implants, the stupidity of the press, and more. He gave a casual update on the “Three Little Pigs” demo of Neuralink's 1024-channel chip, finally admitting that his lofty goals are in a “very, very primitive stage”:

Elon Musk: You can make people walk again. You could solve extreme depression or anxiety or schizophrenia or seizures. You could give a mother back her memory so she could remember who her kids are, you know. Basically, if you live long enough, you’re going to get dementia of some kind. And you’ll want to have something to help you. [NOTE: here, he didn't acknowledge the potential for advancements in biological treatments for dementia.]

Kara Swisher:  Could it program in empathy? Or other things? Do you imagine that being part of this? [LAUGHTER] Or hey you could—

EM: You could technically program anything. So empathy is probably a good one.

KS: So where are we in doing this?

EM: So where we are right now is we’re still in a very, very primitive stage. Where thus far we’ve had a lot of successful implants in pigs. And we now have a pig that has had an implant that’s working well and it’s been there for over three months. And we now have implanted about a dozen pigs. And the sensors are working well. A large part of a pig brain is about its snout. So you can literally rub the pig on its snout and we can detect exactly where you touch the snout. [NOTE: “Yeah, that's called somatotopic mapping,” said John Hughlings Jackson in 1886.]
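For what it's worth, reading out touch location from a grid of recording channels is a routine decoding exercise. Neuralink has not published its pipeline, so the sketch below is a generic illustration with simulated spike counts and an off-the-shelf classifier; every number in it is invented.

```python
# Generic decoding sketch (not Neuralink's method): classify which of 5 snout
# locations was touched from simulated spike counts on 1024 channels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_trials, n_channels, n_sites = 600, 1024, 5

# simulate tuning: each touch site drives a different sparse subset of channels
tuning = rng.normal(size=(n_sites, n_channels)) * (rng.random((n_sites, n_channels)) < 0.05)
sites = rng.integers(0, n_sites, n_trials)                    # true touch site per trial
spike_counts = rng.poisson(lam=np.exp(1.0 + tuning[sites]))   # trials x channels

X_train, X_test, y_train, y_test = train_test_split(
    spike_counts, sites, test_size=0.25, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"decoded touch-site accuracy: {decoder.score(X_test, y_test):.2f}")
```

With well-separated simulated tuning, a linear decoder handles this easily; decoding anything subtler than the body map is another matter entirely.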


Listen to the podcast: Elon Musk: ‘A.I. Doesn’t Need to Hate Us to Destroy Us’ 

In a conversation with Kara Swisher, the billionaire entrepreneur talks space-faring civilization, battery-powered everything and computer chips in your skull.


Bonus!! Musk on Trump:

Kara Swisher: Do you like him? Are you voting for him?

Elon Musk: [SIGHING] I mean, I’m — to be totally frank I’m not — I mean, I think — let’s just see how the debates go. You know?

KS: That’s going to be your thing, the debates?

EM: Well, I think that is probably the thing that will decide things for America.

KS: Why is that?

EM: I think people just want to see if Biden’s got it together.

KS: Mm-hmm. And if he does?

EM: If he does, he probably wins.

 

He hasn't yet tweeted about the disgraceful dumpster fire... 

 





Monday, August 31, 2020

The Mundane Spectacle of the Three Little Pigs



“This Neuralink is implanted in the region of the brain that uh where where the snout the snout is located which is actually quite a large part of the pig's brain.” 1

Elon Musk held a press event (product demo) to make grandiose claims about the Neuralink 1024-channel brain implant currently under development by his start-up.

Three pigs were unveiled, all healthy and happy: Joyce (the one without an implant), Dorothy (who formerly had an implant), and Gertrude, the star of the day with her snout boops. The crowd applauded, impressed at this monumental accomplishment. However, recording spike trains from the brains of animals is as old as time. And actually, wireless Implantable Neuroprosthetics in Pigs is so 2011...2


The title of this post in TNW said it best:

I was excited for Neuralink. Then I watched Elon Musk’s stupid demo
“Here’s the one fact you need to know: Neuralink's actual device is less capable than similar medical BCIs already on the market. The big claim to fame here is that Neuralink hopes one day to bring this technology to the masses.”

And really, invasive intracranial technology is likely to be obsolete by the time the requisite advances in neural decoding occur (if ever). As Kording Lab member Ari Benjamin told BBC News:
“Once they have the recordings, Neuralink will need to decode them and will someday hit the barrier that is our lack of basic understanding of how the brain works, no matter how many neurons they record from.

Decoding goals and movement plans is hard when you don't understand the neural code in which those things are communicated.”

Another winner in the snark department was MIT Technology Review, with:

Elon Musk’s Neuralink is neuroscience theater
“...Neuralink has provided no evidence that it can (or has even tried to) treat depression, insomnia, or a dozen other diseases that Musk mentioned in a slide. One difficulty ahead of the company is perfecting microwires that can survive the ‘corrosive’ context of a living brain for a decade. That problem alone could take years to solve.

The primary objective of the streamed demo, instead, was to stir excitement, recruit engineers to the company (which already employs about 100 people), and build the kind of fan base that has cheered on Musk’s other ventures...”

The cult of Musk is indeed cheering, in a rather credulous fashion (e.g., Why Neuralink Will Change Humanity Forever).


Footnotes

1 It's actually correct that the representation of the snout in pig somatosensory cortex occupies a disproportionately large portion of the cortex.

2 Borton et al. (2011) reported on their “complete neural prosthetic developmental system using a wireless sensor as the implant, a pig as the animal model, and a novel data acquisition paradigm for actuator control.” At that time, the system had 'only' 16 channels, but the field as a whole has evolved since then.


ADDENDUM (Sept 1, 2020):
from Neuralink Progress Update, Summer 2020



An implantable device will solve all these problems by correcting aberrant electrical signals. And summon your Tesla telepathically too!

  • Save and replay memories!
  • Super-Vision! (ultraviolet or infrared)
  • Use a computer by thought alone!


Thursday, July 30, 2020

What Color is Your Mental Parachute?

Aphantasia and Occupational Choice


NOTE: This isn't a real test of visual imagery. Click HERE for the Simple Aphantasia Test, which assesses whether (and how well) you can imagine pictures in your mind's eye.


Do you prefer to learn by studying material that is visual, auditory, verbal (reading/writing), or kinesthetic (“by doing”) in nature? A massive educational industry has promoted the idea of distinct “learning styles” based on preference for one of these four modalities (take the VARK!). This neuromyth has been thoroughly debunked (see this FAQ).

But we humans clearly vary in our cognitive strengths, and this in turn influences our choice of career. This should come as no surprise.

A recent study queried the occupational choices of self-selected populations of people at the extremes of visual imagery abilities: those with Aphantasia (n=993 male/981 female) or Hyperphantasia (n=65 male/132 female). This was assessed by their scores on the Vividness of Visual Imagery Questionnaire (VVIQ). There was also a control group with average scores on the VVIQ, but they were poorly matched on age and education.
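For reference, the VVIQ has 16 items rated on a 1-5 scale, so totals run from 16 to 80. The grouping sketch below uses commonly cited cutoffs (23 and below for aphantasia, 75 and above for hyperphantasia); treat those thresholds as illustrative assumptions rather than the exact criteria in the paper.

```python
# Illustrative VVIQ grouping; the cutoffs are assumptions, not the paper's methods.
def vviq_group(total: int) -> str:
    if not 16 <= total <= 80:
        raise ValueError("VVIQ totals range from 16 to 80")
    if total <= 23:
        return "aphantasia"
    if total >= 75:
        return "hyperphantasia"
    return "mid-range imagery"

print([vviq_group(s) for s in (16, 23, 50, 75, 80)])
# ['aphantasia', 'aphantasia', 'mid-range imagery', 'hyperphantasia', 'hyperphantasia']
```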


Fig. 4 (Zeman et al., 2020). Percentage of participants with aphantasia and hyperphantasia reporting their occupation as being:
1 = Management, 2 = Business and financial; 3 = Computer and mathematical/Life, physical, social science; 4 = Education, training, and library; 5 = Arts, design, entertainment, sports and media; 6 = Healthcare practitioners and technical.


As expected, people with fantastic visual imagery were more likely to be in arts, design, entertainment, and media, as well as sports (an excellent ability to imagine a pole vault or swing a bat would be very helpful). People with poor to no visual imagery were more likely to choose a scientific or mathematical occupation. These categories are rather broad, however. For instance, “media” includes print media. And artists and photographers with Aphantasia certainly do exist.

The study had a number of limitations, e.g., washing out individual differences and relying on introspection for rating visual imagery ability (as noted by the authors). There are more objective ways to test for imagery, but these involve in-person visits. Although the authors were circumspect in the Discussion, they were a bit splashy in the title of their paper (Phantasia–The Psychological Significance Of Lifelong Visual Imagery Vividness Extremes). And the condition of “Aphantasia” existed long before it was named and popularized. But these researchers have caught the imagination of the general public, so to speak:
The delineation of these forms of extreme imagery also clarifies a vital distinction between imagery and imagination: people with aphantasia–who include the geneticist Craig Venter, the neurologist Oliver Sacks and the creator of Firefox, Blake Ross–can be richly imaginative, as visualisation is only one element of this more complex capacity to represent, reshape and reconceive things in their absence.

Reference

Zeman A, Milton F, Della Sala S, Dewar M, Frayling T, Gaddum J, Hattersley A, Heuerman-Williamson B, Jones K, MacKisack M, Winlove C. (2020). Phantasia–The Psychological Significance Of Lifelong Visual Imagery Vividness Extremes. Cortex 130:426-440.

