Wednesday, December 27, 2006

Lax Editorial Standards At Nature Neuroscience?

You be the judge...
Editorial
Nature Neuroscience - 10, 1 (2007)
Setting the record straight

The discovery of serious errors in two recent papers in the journal leads to lessons for authors, referees and editors.

. . .

The first correction involves a Brief Communication (Makara et al., 2005) reporting that inhibition of the enzyme that breaks down the endocannabinoid 2-arachidonoylglycerol enhances retrograde signaling in the hippocampus. The authors concluded that 2-arachidonoylglycerol is important for synaptic plasticity and that the enzyme is a possible drug target, in part because one of the putative inhibitors tested appeared to be specific for the enzyme. They subsequently discovered that
the commercial preparation of this drug was contaminated. When the contaminant was eliminated, the effect disappeared. . . .

The second correction is more complex. The original article (Grill-Spector et al., 2006) reported high-resolution fMRI measurements in the fusiform face area (FFA), a region of the visual cortex that responds more to faces than to other visual stimuli. The authors drew two conclusions: that the FFA is heterogeneous, in that the degree of selectivity varies over the region, and—more remarkably—that the FFA contains some voxels that are highly selective for object categories other than faces. After the paper was published, two groups wrote to point out flaws in the analysis. One letter (Simmons et al., 2007) noted that
the authors used a formula for selectivity that erroneously assigns high selectivity values to voxels with negative responses to nonpreferred categories, causing a substantial overestimate in selectivity for all object categories.

Another group (Baker et al., 2007) spotted a more subtle flaw:
the analysis used to demonstrate selectivity for particular categories did not distinguish between random variation and replicable effects reflecting neural tuning. Random variation can cause some voxels to respond more to some categories than to others. To demonstrate that such differences reflect neural selectivity requires an appropriate statistical analysis, for instance cross-validation across independent datasets. The original paper seemed to report the results of such an analysis—that voxel selectivity was highly correlated between even and odd scans. However, communication with the authors revealed that this analysis had excluded voxels whose responses were negatively correlated across the two sets of scans, a detail that was omitted from the paper. This restriction could falsely increase consistency across scans. Indeed, when the authors redid their analysis without it, the selectivity for nonface objects was not replicated from one set of scans to the next.
My focus is on the second paper, because (1) I can't say anything about 2-arachidonoylglycerol, and (2) getting a contaminated drug from a vendor is not the authors' fault.
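The flaw spotted by Baker et al. is easy to demonstrate with a toy simulation. Below is a minimal sketch in Python (my own construction, not code from the letter or the original paper): generate pure-noise "responses" for a set of voxels in odd and even scans, and the across-half correlation of each voxel's category profile averages out to zero, as it should. Exclude the negatively correlated voxels, though, and apparently "replicable" selectivity materializes out of thin air.

import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_categories = 200, 6

# Pure noise: no voxel has any true category selectivity.
odd = rng.normal(size=(n_voxels, n_categories))
even = rng.normal(size=(n_voxels, n_categories))

def profile_r(a, b):
    # Pearson correlation of each voxel's category profile across scan halves.
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    return (a * b).sum(axis=1) / np.sqrt((a**2).sum(axis=1) * (b**2).sum(axis=1))

r = profile_r(odd, even)
print(f"mean r, all voxels:       {r.mean():+.3f}")       # ~0: noise does not replicate
print(f"mean r, excluding r < 0:  {r[r > 0].mean():+.3f}")  # clearly positive: pure artifact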

The editorial continues:
The authors of the article acknowledge both errors in their correction (Grill-Spector et al., 2007). When these errors are fixed, the most interesting conclusion of the paper—that the FFA contains voxels highly selective for nonface objects—is no longer supported.
However, the editors refuse to retract the paper:
In both cases, after considerable discussion with colleagues, we have decided to publish a correction to the original paper rather than a retraction, even though it seems likely that neither paper would have been published in Nature Neuroscience had the errors been identified and corrected during the review process. Retractions were deemed inappropriate because they would have removed from the record some valid data and conclusions that are likely to be useful to specialists in the field, and it seemed unlikely that the authors would be able to publish these data elsewhere.
Well, boo hoo!! The authors can't publish their compromised data anywhere except in Nature Neuroscience. Let's all send our unpublishable data to Nature Neuroscience! We can also appeal to the archenemy journal, Science, to publish results that contradict articles in the Nature family of journals...

The Neurocritic started this blog as a means of quality control in the field, a way to point out errors, exaggerations, misinterpretations of data, or as the masthead says,

Deconstructing the most sensationalistic recent findings in Human Brain Imaging, Cognitive Neuroscience, and Psychopharmacology

The Nature Neuroscience editorial concludes:
...the ultimate responsibility for recruiting referees with appropriate expertise lies with the editors, and in this case we clearly should have consulted referees with stronger mathematical expertise.

Nonetheless, it is common practice in functional imaging (and indeed in other areas of neuroscience) to analyze experiments by
selecting data according to some criteria and then plotting the average response, without testing an independent data set to ensure that the selection criteria have not merely picked out random variation in a particular direction.
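This trap is easy to fall into, and just as easy to simulate. Here is a second sketch (again my own, purely illustrative): select the voxels that respond most strongly in one run of pure noise, and their average response looks like a robust effect; test the very same voxels on an independent run, and the effect evaporates.

import numpy as np

rng = np.random.default_rng(1)
n_voxels = 1000

run1 = rng.normal(size=n_voxels)  # one run of pure noise
run2 = rng.normal(size=n_voxels)  # an independent run, same voxels

picked = run1 > 1.0               # select "strongly responding" voxels from run 1

print(f"picked voxels, run 1 mean: {run1[picked].mean():.2f}")  # ~1.5: looks like an effect
print(f"same voxels,   run 2 mean: {run2[picked].mean():.2f}")  # ~0.0: fails to replicate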
How many other prominent articles are of dubious accuracy? The saga continues...


10 Comments:

At December 28, 2006 8:48 PM, Anonymous Anonymous said...

you are reporting errors that were caught (i.e., editorial standards and the scientific community spotted the problem in case #2). Find a problem that has not already been reported and I might be more impressed.

 
At December 28, 2006 8:52 PM, Blogger The Neurocritic said...

Gee, where should I start? Just browse through the archives of this blog and you'll find plenty...

 
At January 01, 2007 4:49 AM, Anonymous Anonymous said...

try "letter to the editor", your commentary is superficial at best and, if you had a substantial criticism it should be voiced to the journal. This is NOT an example of lax editorial standards. This is how science works. You get the most qualified reviewers you can for an article and sometimes, something important is missed. If so, you count on the scientific communitity to speak up, which has happened. this is nothing new and is seen in all of the journals.

 
At January 02, 2007 10:32 AM, Blogger The Neurocritic said...

Which journals? What are some examples? Did those journals withdraw the erroneous articles, unlike Nature Neuroscience? That was my main point, which you seem to have missed.

 
At January 02, 2007 8:01 PM, Anonymous Anonymous said...

OK, so I'm not the same person as the other anonymous, but I have to say that I see points in favor of both the Neurocritic and the other anonymous, who sounds a lot like somebody in the editing business.

I think the first valid point of the Neurocritic is precisely that when a paper is shown to be fatally flawed after it has appeared, it should be retracted. Period. Although this probably really smarts, I kind of doubt that Grill-Spector's career is going to go away.

A second valid point that is less strongly made (but, I think at least as important) is that the results of many (most?) functional imaging studies really do depend on careful and valid analyses of the data, and, to be completely honest about it, most people doing functional imaging work wouldn't recognize flaws of this kind if they were hit over the head repeatedly with them. But, as the Neurocritic has shown on a fairly regular basis, the problem is even worse than that: repeatedly, journals have allowed authors to go almost completely beyond their data in stating conclusions. Indeed, when I read papers like this these days, I find it generally useful to assume that the grandest claim is almost certainly wrong, even if the data themselves could say something interesting about the phenomenon in question.

That said, I agree with the anonymous editor that the problem is not with lax editorial standards per se. You really are substantially at the mercy of your reviewers, and it is not easy to get the best and most qualified reviewers to say "yes" nearly as often as you would like. Or, worse, even when you get exactly the right reviewers with the perfect expertise, it is (pretty much by definition) tough to second guess them when they all say "this is great". To a first approximation, you don't get into Science or Nature unless you get unanimously enthusiastic reviews, which is what makes problems like these so maddening (assuming you got the right reviewers, of course).

I honestly don't know how to fix that.

 
At January 03, 2007 6:24 PM, Blogger The Neurocritic said...

Thanks for your comments, Anonymous #3. You're right, manuscript reviews are only as good as the referees. At some journals, however, the editor wields considerable discretion and can override reviewers' recommendations (either to publish or to reject). Of course, that's not to say that's what happened at NN. The likely scenario is what you suggested: two very enthusiastic recommendations from reviewers who were not savvy in fMRI analysis techniques. But, as the editorial states,

...the ultimate responsibility for recruiting referees with appropriate expertise lies with the editors, and in this case we clearly should have consulted referees with stronger mathematical expertise.

 
At January 05, 2007 4:28 AM, Anonymous Anonymous said...

I wonder what's expected of editors and reviewers. In mathematics, presumably reviewers have to check the validity of proofs? Do reviewers of, e.g., fMRI studies check the maths used in models?

 
At January 05, 2007 4:44 PM, Blogger The Neurocritic said...

It does seem that at least one reviewer should be able to see flaws in data analysis. But as Anonymous #3 said,

...the results of many (most?) functional imaging studies really do depend on careful and valid analyses of the data, and, to be completely honest about it, most people doing functional imaging work wouldn't recognize flaws of this kind if they were hit over the head repeatedly with them.

Plus, the formula wasn't very complicated, as explained by Simmons et al. (2007):

Grill-Spector calculated voxel selectivity using the following formula: Selectivity = [Preferred - Nonpreferred] / [Preferred + |Nonpreferred|]. In this formula, "Preferred indicates the amplitude of the category that yielded the maximal response and Nonpreferred indicates the average amplitude of other categories" (p. 1184), with a selectivity value of 1 indicating maximum selectivity, and 0 indicating that a voxel has no preference for any of the stimulus categories tested. This formula is similar to the standard formula used in many electrophysiological studies, with the exception that the absolute value of the average nonpreferred response was used in the denominator. Unlike spike rate data, however, negative values are commonly observed in fMRI, and this has significant consequences when used with this formula.
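To see just how badly that absolute value misbehaves with fMRI data, plug in some numbers (my own illustration, not taken from the letter): a voxel with a weak positive response to its "preferred" category and an equally weak negative response to everything else comes out as maximally selective.

def selectivity(preferred, nonpreferred):
    # The formula as quoted above: absolute value in the denominator.
    return (preferred - nonpreferred) / (preferred + abs(nonpreferred))

print(selectivity(2.0, 0.5))   # 0.6 -- a genuinely selective voxel
print(selectivity(0.1, -0.1))  # 1.0 -- "maximal" selectivity from noise-level responses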

 
At January 11, 2007 11:39 AM, Anonymous Anonymous said...

I don't understand what difference it makes whether it was officially retracted or not; everyone involved in the field knows what happened, and Kalanit's reputation has taken a significant hit in either case. Nobody is going to cite this paper without knowing that the data is incorrect (although they are right in suggesting that there is still interesting data in the paper). It's not like anybody is going to accidentally count this as a Nature Neuroscience publication when she goes up for tenure.

The only real problem here is that Kalanit and Nature Neuroscience pushed this paper through despite warnings from Nancy Kanwisher and others that her data was highly suspicious. She had definitely been warned in advance that she was probably analyzing it incorrectly, and my understanding is that Nancy even told her that she could replicate the results using noise data from outside the brain before it got published.

 
At January 12, 2007 2:55 PM, Blogger The Neurocritic said...

Thanks for an insider's view, Anonymous #4. It provides an example of people (even scientists) believing what they want to believe, despite evidence to the contrary. Or else an example of the pressure in academia to publish at all costs.

I don't understand what difference it makes whether it was officially retracted or not; everyone involved in the field knows what happened...

But that doesn't mean that people outside the fMRI/face recognition/vision subfield will know about it. If one were just browsing News and Views articles and came across Fine structure in representations of faces and objects, there would be no indication that glowing statements about the significance of high-resolution fMRI are based on faulty findings.

For those without a subscription:

When Galileo looked at the planets with his telescope and discovered the moons of Jupiter, he transformed our understanding of the cosmos. When van Leeuwenhoek looked at pond water through his microscope, he discovered a world that transformed our understanding of life. High-resolution imaging of brain function now promises to transform our understanding of how neural activity represents information—the physical basis of knowledge.

 
