Friday, December 07, 2012

The Not So Seductive Allure of Colorful Brain Images

We all know that the mere presence of a brain scan image or a neuro-prefix adds instant credibility to any news story, right? And that the public (i.e., undergraduates) is easily swayed into believing in bogus psychological findings if accompanied by pretty colorful brains? Well count me in! But wait...

Neuroscience Fiction Fiction?

The day after the high-profile Neuroscience Fiction article by Dr. Gary Marcus appeared in The New Yorker, a stealthy blog post in Brain Myths summarized an unpublished paper (Farah & Hook, in press, PDF) that refutes this notion.1

Are Brain Scans Really So Persuasive?
New evidence suggests the allure of brain scans is a myth

Published on December 3, 2012 by Christian Jarrett, Ph.D.

A pair of psychologists at The University of Pennsylvania have highlighted a delicious irony. Sceptical neuroscientists and journalists frequently warn about the seductive allure of brain scan images. Yet the idea that these images are so alluring and persuasive may in fact be a myth. Martha Farah and Cayce Hook refer to this as the “seductive allure of ‘seductive allure’” (PDF via author website).

Most of their evidence against the "seductive allure" is from unpublished data described in their in-press article (which we can't evaluate yet):
Two series of as yet unpublished experiments have failed to find evidence for the seductive allure of brain images. Michael, Newman, Vuorre, Cumming, and Garry (2012, under review) reported a series of replication attempts using McCabe & Castel’s Experiment 3 materials. Across nearly 2000 subjects, a meta‐analysis of these studies and McCabe & Castel’s original data produced a miniscule estimated effect size whose plausible range includes a value of zero. Our own work (Hook & Farah, in preparation) has also failed to find evidence that brain images enhance readers’ evaluation of research in three experiments comprising a total of 988 subjects.
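The pooling described above, combining McCabe & Castel's original data with the replication attempts, is a standard fixed-effect meta-analysis of standardized mean differences. As a rough illustration of how such a pooled estimate and its "plausible range" are computed, here is a minimal sketch; the effect sizes and sample sizes below are entirely hypothetical, not the actual study data.

```python
# Minimal sketch of a fixed-effect, inverse-variance meta-analysis of
# Cohen's d. All numbers below are hypothetical, for illustration only.
import math

def pooled_effect(studies):
    """studies: list of (d, n1, n2). Returns (pooled d, 95% CI)."""
    weights, weighted = [], []
    for d, n1, n2 in studies:
        # approximate sampling variance of d for two independent groups
        var = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
        w = 1.0 / var
        weights.append(w)
        weighted.append(w * d)
    d_pool = sum(weighted) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return d_pool, (d_pool - 1.96 * se, d_pool + 1.96 * se)

# hypothetical replications: (observed d, n per image group, n per control group)
studies = [(0.04, 250, 250), (-0.02, 240, 260), (0.07, 255, 245), (0.00, 250, 250)]
d_pool, (lo, hi) = pooled_effect(studies)
print(f"pooled d = {d_pool:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

With near-zero effects and large samples, the pooled estimate comes out tiny and its confidence interval straddles zero, which is exactly the pattern the quoted passage describes.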
However, one published paper did fail to find an effect of fMRI images on how participants judged the scientific reasoning and credibility of a fake news story titled, “Scientists Can Reconstruct Our Dreams” (Gruber & Dickerson, 2012).2 The study was designed to replicate the previous study of McCabe and Castel (2008) with some notable exceptions. Rather than using a bar graph or an ugly and cluttered EEG topographic map as the comparison images in separate groups, Gruber and Dickerson used:
...a fantastical, artistic image of a human head and a cyberspace-esque background with swirly lines. The final group was given an image from the popular science fiction film Minority Report in which three children’s dreams of the future are projected on a screen and used to prevent crime.

Very io9... But both studies did have a no-image condition.

The Gruber and Dickerson study also added questions that explicitly assessed credibility and authoritativeness, in addition to whether the scientific reasoning made sense. In all cases, ratings did not differ statistically across the conditions, including the fMRI vs. no-image comparison.
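A comparison of mean ratings across several image conditions is typically a one-way ANOVA. As a sketch of the kind of test behind a "ratings did not differ statistically" result, here is a self-contained example; the condition names mirror the study's design, but the ratings themselves are invented for illustration.

```python
# Hypothetical one-way ANOVA on credibility ratings across image conditions.
# The ratings below are invented; only the structure of the test is the point.

def one_way_anova(groups):
    """Return (F statistic, df_between, df_within) for a list of rating lists."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

conditions = {
    "fMRI image":     [3.1, 2.8, 3.4, 3.0, 2.9],
    "artistic image": [3.0, 3.2, 2.7, 3.1, 2.8],
    "no image":       [2.9, 3.3, 3.0, 2.7, 3.1],
}
f_stat, df_b, df_w = one_way_anova(list(conditions.values()))
print(f"F({df_b}, {df_w}) = {f_stat:.2f}")
```

A small F relative to its critical value is what "did not differ statistically" cashes out to: the between-condition variance in ratings is no larger than expected from within-condition noise.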

Hmm... Farah and Hook also debunked the study of Weisberg et al. (2008), which didn't use images at all but added neuroscience-y explanations to 18 actual psychological phenomena. The problem was that the neuroscience-y paragraphs were longer than the no-neuroscience paragraphs. The author of the excellent but now-defunct Brain In A Vat blog had a similar objection, as explained in I Was a Subject in Deena Weisberg's Study:
So how does it feel being held up to the scientific community as an exemplar idiot? Well, it’s a bit embarrassing. One of my coping mechanisms has been to criticize the experimental design. For instance, I think it’s problematic that the with-neuroscience explanations were longer than the without-neuroscience explanations. If subjects merely skimmed some of the questions (not that I would ever do such a thing), they might be more likely to endorse lengthier explanations.

Neuroskeptic also raised this point in his otherwise [mostly] positive evaluation of the study, Critiquing a Classic: "The Seductive Allure of Neuroscience Explanations":
Perhaps the authors should have used three conditions - psychology, "double psychology" (with additional psychological explanations or technical terminology), and neuroscience (with additional neuroscience). As it stands, all the authors have strictly shown is that longer, more jargon-filled explanations are rated as better - which is an interesting finding, but is not necessarily specific to neuroscience.

He noted that the authors acknowledged this objection, but also that the conclusions we can draw from the study are fairly modest.

What does this mean for Neuro Doubt and Neuroscience Fiction and Neurobollocks? The takedowns of overreaching interpretations, misleading press releases, and boutique neuro-fields are still valid, of course, but the critics themselves shouldn't succumb to the seductive allure of seductive allure. But we must also remember that the most thorough critiques of seductive allure still await peer review.3

UPDATE 12/13/12: I had e-mailed Dr. Deena Weisberg to get her response to the Farah and Hook paper. Here is her reply (quoted with permission):
I have indeed seen the paper that you sent and I think it's a very interesting piece of work. Like many other researchers, I was under the impression that images play some role in making neuroscience appealing, but I would be perfectly happy to be proven wrong about that. I think the case that Hook & Farah make is compelling, although we should reserve final judgment about what exactly is going on until we have more data in hand.

I have a few minor points to add:
First, I stand by the results from my 2008 study, which showed that neuroscience jargon can be inappropriately persuasive in the absence of images. I can't (yet!) claim to know exactly why this is the case, but something about neuroscience information does seem to be unusually alluring. That said, I completely agree with the argument that you and others have made about the with-neuroscience items being longer than the without-neuroscience items, although I would be surprised if length can account for the entirety of the effect. Obviously, more work needs to be done here, and again, I would be happy to be proven wrong.

Second, it's possible that neuroscience images do have some effect on people's judgments, but perhaps the studies that have been done so far just haven't found the right dependent measure. Maybe images don't affect how credible someone thinks a finding is, but do affect how much they want to read a news article that contains that finding or provide funding for the research program, for example.

Third, all of this suggests that it might be even more interesting to study the sociology of this phenomenon --- why do so many people think that neuroscience images are persuasive when they aren't?

Happy to be in touch if you have any further questions, and keep up the good work on the blog!

I'd also like to quote this comment from @JasonZevin, which is relevant to the issue of not quite having the correct dependent measure yet: "...IMO the effect seems both real, and hard to produce in the lab."


1 And makes me feel a little silly.

2 The experiment must have been designed before these actual 2012 headlines: Scientists read dreams (Nature) and Scientists decode contents of dreams (Telegraph).

3 I wrote to two of the authors of the original studies (Weisberg and Castel) to get their reactions, but haven't heard back. Very, very tragically, we cannot hear from Dr. McCabe (tribute in APS Observer, PDF). In retrospect, my latter inquiry may have been gauche, so I apologize for that.


Farah MJ, Hook CJ (in press). The seductive allure of "seductive allure". Perspectives on Psychological Science. PDF

Gruber D, Dickerson JA (2012). Persuasive images in popular science: Testing judgments of scientific reasoning and credibility. Public Understanding of Science 21(8):938-948. DOI: 10.1177/0963662512454072

McCabe DP, Castel AD. (2008). Seeing is believing: the effect of brain images on judgments of scientific reasoning. Cognition 107:343-52.

Weisberg DS, Keil FC, Goodstein J, Rawson E, Gray JR. (2008). The seductive allure of neuroscience explanations. J Cogn Neurosci. 20:470-7.

Scientists read dreams 
Brain scans during sleep can decode visual content of dreams.

Mo Costandi
19 October 2012

Scientists have learned how to discover what you are dreaming about while you sleep. A team of researchers led by Yukiyasu Kamitani of the ATR Computational Neuroscience Laboratories in Kyoto, Japan, used functional neuroimaging to scan the brains of three people as they slept, simultaneously recording their brain waves using electroencephalography (EEG).

NOTE: The image from Minority Report was not used in the actual Nature News article...



At December 08, 2012 6:01 AM, Blogger Kat Hooper said...

This gives me so many ideas for experiments for my research methods class. Thanks!

At December 09, 2012 1:48 AM, Anonymous Stephan Schleim said...

When I referred to the studies by Weisberg et al., 2008, McCabe & Castel, 2008, but also Keehner, Mayberry & Fischer, 2011, Psychon Bull Rev, I tried to emphasize that the effects were rather small, sometimes present in subgroups only, and did not support the exaggerated claims some inferred from the headlines.

However, when colleagues now fail to replicate the findings in this particular case, this does not mean that the effect was not there several years ago. For example, there has been an increase in critical reports on fMRI in public as well as expert media. If subjects are more likely to learn now than five years ago that fMRI interpretations have to be taken with care, this could diminish the (already small) "seductive" effect.

I also remember some research ethics classes, by the way, where talking about unpublished data was considered problematic, too. Some journals actually threaten to reject a paper when the finding was already communicated before or during the peer review process.

At December 13, 2012 1:57 PM, Anonymous J Zevin said...

So, to amplify on my tweet, I think it's very hard to measure a construct like the convincingness of an explanation in the lab, especially when you have to do this by asking people to make an explicit judgment. If only there were some neural correlate of being convinced that we could measure!

Seriously, though. One way to test this "in the wild" would be to do a simple A/B marketing test for a cognitive fitness product. One advertisement would claim that, for example, practicing the N-back task has been shown to improve performance on IQ tests. The other would claim that training on the same task enhances activity in brain regions associated with higher IQ scores. Then your DV is just how many click-throughs you get for each ad.
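The proposed A/B test reduces to comparing two click-through proportions. A minimal sketch of the analysis is a two-proportion z-test; the click counts and impression counts below are invented purely to show the mechanics.

```python
# Sketch of the A/B click-through comparison proposed above: a two-proportion
# z-test on clicks for the "behavioral claim" ad vs. the "brain claim" ad.
# All counts are hypothetical.
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Return (z statistic, two-sided p-value) for two click-through rates."""
    p = (clicks_a + clicks_b) / (n_a + n_b)            # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled standard error
    z = (clicks_b / n_b - clicks_a / n_a) / se
    # two-sided p-value from the normal CDF, via math.erf
    p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_two_sided

# ad A: "practicing N-back improves IQ test performance"
# ad B: "N-back training enhances activity in brain regions linked to IQ"
z, p = two_proportion_z(clicks_a=120, n_a=5000, clicks_b=135, n_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With ad impressions this cheap to accumulate, even a very small "brain claim" advantage would eventually reach significance, which is what makes the design attractive for an effect that is hard to produce in the lab.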

At December 21, 2012 6:28 AM, Anonymous Martha Farah said...

Allow us to chime in! First, with our thanks to Neurocritic and his commentators for this great discussion. Second, with some comments of our own.

We agree: Null results are hard to interpret. There might be a true effect of fMRI on the credibility of scientific reasoning that is just very, very small or fragile, and hence not reliably observed.

We also agree: One cannot base an argument on unpublished data… Our data were mentioned only in passing, along with others’ published and in press data. The focus of our essay was on how little evidence there is for the effect, rather than how much evidence there is against it, as well as on interpretational problems with the small amount of evidence for the effect. (BTW, we have no problems w Weisberg’s study – it’s just not relevant to brain images per se.)

We’ll share our longer empirical paper as soon as it’s ready to circulate. (We asked the editor about the suitability of a 3-experiment paper on this topic for Perspectives in Psychological Science and she said no thanks but invited an essay on the topic, which is how the short paper under discussion came to be.)

So, to the bottom line: Is there no effect of fMRI on judgments of reasoning? A very small or fragile one? Or, is there, in McCabe and Castel’s words, “a particularly powerful persuasive influence on the perceived credibility of cognitive neuroscience data?” As best we can tell, based on considerations laid out in our essay (and elaborated in the paper we are currently writing), the effect is either nonexistent or negligibly small; the third possibility seems extremely unlikely. And when scientists say things like “IMO the effect seems both real, and hard to produce in the lab,” we think it’s time to say “Could be, but what makes you think so?”

-- Martha and Cayce

At December 22, 2012 12:03 PM, Blogger The Neurocritic said...

Dr. Farah - Thanks for taking the time to comment. I'll certainly look forward to reading your paper when it's available. It seems to me that there must be a reason for the glut of "your brain on ____" news reports for a number of years now. Maybe the correct population to study is media content editors!

I think it would be an interesting project to literally look at this search in Google News. Some of the articles do have brain images but a lot of them don't. So it's important to also look at the language used.

