Scan Scandal Hits Social Neuroscience
Mind Hacks uncovers a pre-print (PDF) by Vul, Harris, Winkielman, and Pashler entitled "Voodoo Correlations in Social Neuroscience". It's a "bombshell of a paper" that questions the implausibly high correlations observed in some fMRI studies in the field of Social Neuroscience. Vul et al. surveyed the authors of 54 papers to determine the analytic methods used. All but three of the authors responded to the survey, and 54% admitted to using faulty methods to obtain their results:
More than half acknowledged using a strategy that computes separate correlations for individual voxels, and reports means of just the subset of voxels exceeding chosen thresholds. We show how this non-independent analysis grossly inflates correlations, while yielding reassuring-looking scattergrams. This analysis technique was used to obtain the vast majority of the implausibly high correlations in our survey sample. In addition, we argue that other analysis problems likely created entirely spurious correlations in some cases.

A few of The Neurocritic's targets were on the hit list, so stay tuned.... there's more to come in 2009.
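The inflation Vul et al. describe is easy to reproduce in simulation. The sketch below (my own illustration, not code from the paper) generates voxel data with zero true correlation to a behavioral score, selects the voxels whose sample correlation happens to exceed a threshold, and then reports the mean correlation of that same subset — the non-independent step:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 16, 10_000

# Behavioral score and voxel activations with NO true relationship.
behavior = rng.standard_normal(n_subjects)
voxels = rng.standard_normal((n_subjects, n_voxels))

# Per-voxel Pearson correlation with behavior (vectorized).
b = (behavior - behavior.mean()) / behavior.std()
v = (voxels - voxels.mean(axis=0)) / voxels.std(axis=0)
r = (b @ v) / n_subjects

# Non-independent analysis: keep only voxels whose correlation
# exceeds a threshold, then report the mean r of that same subset.
threshold = 0.6
selected = r[r > threshold]
print(f"voxels selected: {selected.size}")
# Mean r of the selected voxels is well above zero, even though the
# true correlation at every voxel is exactly zero.
print(f"mean r of selected voxels: {selected.mean():.2f}")
```

With only 16 subjects, sampling noise alone pushes a few hundredths of a percent of voxels past r = 0.6, and averaging over those survivors can only return a value above the threshold — an impressive-looking correlation conjured from pure noise.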
19 Comments:
A bombshell indeed but if you look at their colourful little chart on page 14, the median correlation coefficient of the "good" papers is still 0.6, and none was below 0.25 - which is still surprisingly high.
To be honest, in a field like this I think you have to take the first report of a given correlation as a post hoc chance finding, whatever it is, and only start to believe in independently replicated findings. There are a couple, if I recall...
Yes, I saw their colorful little chart on page 14. The problem was with the "bad" papers, not the "good" ones (which comprised less than half the literature in the field, unfortunately).
I agree that replication is a good idea, but tell that to the editors of Science, Nature, and Neuron.
I'm trained in statistical analysis. Page 14 is a bombshell. If the evidence they present is correct, the whole fMRI deal is in question.
The behavior of science journals in general has not been so good lately:
http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pmed.0020124&ct=1
Why Most Published Research Findings Are False
John P. A. Ioannidis
“There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims [6–8]. However, this should not be surprising. It can be proven that most claimed research findings are false.”
Thank you for this post...
sonic
There's much more wrong with fMRI than page 14 of that article, really. That chart seems pretty innocuous in itself (as I said, it makes me suspicious of publication bias, but that's a problem with half of science nowadays).
A - This article doesn't call the whole of fMRI into question, just one method of relating brain activity to behavior.
B - This article is part and parcel of how science advances. As we go we find out that we have been doing things wrong, then we correct our methods and march forward. The Vul et al. paper will prompt reviewers to scrutinize correlations more closely, which is something that has been needed in the literature for some time - quality is about to go up.
In fairness, I am an fMRI researcher. We are discussing the paper in my lab and we just checked all of our recent articles to see if we have slipped up (we were clean). There are enough nutjobs doing crappy work with fMRI to give it a bad name [like fMRI lie detection], but on the whole it is an amazing technique that remains our best method of observing the human brain in vivo.
You're right, Prefrontal, it doesn't call all of fMRI into question. [But then we see announcements in J. Neurosci. like "BOLD Signals Do Not Always Reflect Neural Activity."] The other 46% of the papers were not guilty of the non-independence error. But I think it's notable that so many of the offending papers were in high-profile journals.
Glad to hear that your lab's papers are all "clean"!
Neurocritic,
I definitely agree that the majority of 'sensational' work that comes out of fMRI research is suspect. When you have folks scanning Super Bowl commercials, lie detection, mind reading, Coke vs Pepsi, and political preferences just to get a press release there is going to be a lot of frustration, both in the scientific community and in the public.
Still, rightly done it is a powerful tool. I would also argue that even if 20% of what is produced is total crap then it means that 80% is significantly contributing to our scientific knowledge. Truth accrues and, over time, error cancels.
Bottom line: fMRI isn't going away. In fact, current trends are going in quite the opposite direction. What are we to do in that case? Lambast functional imaging for its limitations? Deride the whole field because of the most vocal fools? Or, should we proactively refine our methods to make it better than before? I think that is what the Vul paper accomplishes - another step on the path to better science. We need more critical views just like it to mature as a field.
What an awesome paper!
One question though (I'm new to this blog): why the photos from Ekman's work? Was that a big statistical flub too?
Thanks Neurocritic!
Jess
The reason for the Ekman faces was primarily as a joke, not because there was anything intrinsically wrong with them. Some of the papers in the meta-analysis did use the Ekman faces as stimuli, so my positioning of the images made reference to that, as well as to the mixed emotions that people might have about the paper.
I can be a bit sarcastic at times...
Thanks for visiting, Jess!
I love this. People are slating the whole social neuroscience field based on one paper with seemingly little knowledge. The paper is simply wrong in a lot of respects. For instance, they cite the Singer et al. paper on empathy as showing a spurious correlation, yet all the main findings are not based on correlations at all (rather, on co-activation of areas for self and other pain obtained from ANOVAs with proper corrections for multiple comparisons). There are many other problems with this paper, not least that the method for dealing with multiple comparisons described by the authors as being used by social neuroscientists is WAY out of date (and was never really in date). The way this is handled is through the use of Gaussian random field theory. I think you should be sceptical of the sceptics as well as the social neuroscience researchers...
To the Anonymous of January 11, 2009 10:09 AM,
As critical as The Neurocritic can be, I don't think I'm trashing the entire field of Social Neuroscience.
Ed Vul addresses this and other concerns of yours on his website (reproduced below).
Q: Interpretations of your paper are varied (some suggest that this critique is damning to all social science, social neuroscience, or these articles in particular). If I believe your critique, what conclusion should I walk away with about these fields and studies?
A: We focus on correlations between fMRI measures of the brain and individual differences in personality and emotion. The field of social neuroscience extends far beyond these studies. Of the studies we sampled, just under half of the people reported using what we consider to be appropriate analyses. So we are certainly not suggesting that all (or even close to all) of the papers we surveyed are wrong. Moreover, some of the studies that used non-independent analyses to obtain correlation measures also reported findings that did not involve the localization of individual difference measures in the brain, and we are saying nothing about those other findings.
Finally, with respect to the set of studies that used the non-independent correlation analyses we criticize, we argue that the actual reported correlation values are biased, inflated, and thus, it might be reasonable to say, pretty meaningless. However, this does not mean that the true correlation is therefore zero. Some of the studies do provide evidence suggesting that there is probably some nonzero correlation there. We don't think that a correlation of 0.1 is nearly as important as a correlation of 0.8, but it could still have scientific value.
Our main point, however, is more positive: there are several transparent ways in which accurate estimates of the correlation may be obtained in these studies, and in future studies approaching the same problems. We argue that this is what should be done, even on the data that have already been published.
Please see http://www.bcn-nic.nl/replyVul.pdf for a reply by some of the authors who are criticized.
Interested parties can read Voodoo Counterpoint for an excerpt of that rebuttal by Jabbi, Keysers, Singer, and Stephan (entire PDF).
For those interested, you can find our response to this reply here:
http://edvul.com/voodoorebuttal.php
Cheers, Ed.
There is also a rejoinder from Ed Vul here:
http://edvul.com/voodoorebuttal.php
Here is our invited reply to Vul et al.
http://www.scn.ucla.edu/pdf/LiebermanBerkmanWager(invitedreply).pdf
Lieberman, Berkman, and Wager - The pointer to your rebuttal is much appreciated. I did link to it in a new post.
For anyone interested, there was a public debate on Voodoo Correlations last fall at the Society of Experimental Social Psychologists between Piotr Winkielman (one of the authors on the Voodoo paper) and myself (Matt Lieberman). The debate has been posted online.
http://www.scn.ucla.edu/Voodoo&TypeII.html
Matt - Thanks for providing the link. I posted the videos here:
Voodoo and Type II: Debate between Piotr Winkielman and Matt Lieberman