
Saturday, March 31, 2012

How vulnerable is the field of cognitive neuroscience to bias?



That's the opening sentence in an abstract by Joshua Carp, who will be presenting tomorrow in Slide Session 3 at the 2012 CNS Meeting in Chicago. The question caught my eye in light of the Psych Science paper on False-Positive Psychology ["undisclosed flexibility in data collection and analysis allows presenting anything as significant"] and the recent blog post by Dr Daniel Bor on The dilemma of weak neuroimaging papers.


Slide Session 3

Sunday, April 1, 10:00 am - 12:00 pm, Red Lacquer Room

Estimating the analytic flexibility of functional neuroimaging: Implications for uncertainty and bias in cognitive neuroscience

Joshua Carp; University of Michigan

How vulnerable is the field of cognitive neuroscience to bias? According to a recent mathematical model, the potential for scientific bias increases with the flexibility of analytic modes. In other words, the greater the range of acceptable analysis strategies, the greater the likelihood that published research findings are false. Thus, the present study sought to empirically estimate the analytic flexibility of fMRI research. We identified five pre-processing decisions and five modeling decisions for which two or more analysis strategies are commonly used in the research literature. By crossing each of these strategies and decisions, we identified 4,608 unique analysis pipelines. Next, we applied each of these pipelines to a previously published fMRI study of novelty detection in an auditory oddball task. We found that activation estimates were highly dependent on methodological decisions: contrasts that yielded significant positive activation under one pipeline were associated with non-significant positive activation or even with negative activation under other pipelines. Some analysis decisions contributed more to this variability than others, and each decision exerted a unique pattern of variability across the brain. The effects of a given decision also varied across contrasts, subjects, and other analysis parameters. In sum, we found considerable quantitative and qualitative variability across analysis pipelines, suggesting that the results of cognitive neuroimaging experiments may be more uncertain than they seem. Indeed, given a supercomputer, a sufficiently motivated analyst might observe almost any imaginable pattern of results.
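To make the combinatorics concrete, here is a minimal sketch in Python of how crossing a handful of common pre-processing and modeling choices multiplies into thousands of pipelines. The decision names and options below are illustrative assumptions on my part, not the specific set Carp crossed to reach 4,608, and the `run_preprocessing_and_glm` call is a hypothetical placeholder for an actual analysis.

```python
# Sketch: crossing analysis decisions into unique pipelines.
# Decision names and option values are illustrative, not Carp's actual set.
from itertools import product

decisions = {
    "slice_timing_correction": [True, False],
    "motion_regressors":       [0, 6, 24],
    "smoothing_fwhm_mm":       [4, 8, 12],
    "temporal_filtering":      ["high-pass", "none"],
    "hrf_model":               ["canonical", "canonical+derivatives"],
    "autocorrelation_model":   ["AR(1)", "none"],
}

# Every combination of one option per decision is a distinct pipeline.
pipelines = list(product(*decisions.values()))
print(f"{len(pipelines)} unique pipelines from {len(decisions)} decisions")
# 2 * 3 * 3 * 2 * 2 * 2 = 144 here; ten decisions easily reach the thousands.

# In the study's logic, each pipeline would be applied to the same oddball
# data set and the resulting activation maps compared across pipelines.
for settings in pipelines:
    config = dict(zip(decisions.keys(), settings))
    # run_preprocessing_and_glm(config)  # hypothetical placeholder
```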


Reference

Simmons JP, Nelson LD, Simonsohn U. (2011). False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 22:1359-66.

5 comments:

  1. I love this. Wish I could be at the conference.

  2. Bias can unintentionally enter analyses. For example, testing multiple hypotheses on a data set until something significant is found can skew results toward false-positive findings, though it's good to explore data to figure out the range of things it does or does not show. (A toy simulation of this inflation is sketched after the comments.)

    Still, the claims of this abstract are more an issue of academic fraud than of bias. For this to be a real issue, researchers would need to try multiple processing pipelines and select the ones that give optimal results while ignoring the others. In my experience, some researchers don't really understand the purpose of various processing choices and always use the same method (with the same tweak or two), while others try multiple methods to make sure the results aren't sensitive to processing choices. If the results are sensitive, they try to figure out what is causing that sensitivity and select the methodologically proper processing pipeline.

    This abstract seems to imply that all processing choices are considered equally good in every situation. This is false. Given certain types of data issues, some choices are more likely to give false-positive results (for example, no motion correction when there is task-correlated motion).

    If anyone has the unethical desire to run every processing permutation through a supercomputer and cherry-pick the best one, they're wasting their time. A random number generator and Photoshop are much cheaper.

  3. bsci - Someone brought up a similar point during the Q&A: not all processing steps are equally optimal, so did the speaker try weighting them? The answer was along the lines of: not all "essential" steps are used in all papers.

    However, the funniest exchange was this one:

    Q - If all neuroimaging studies are false, how about your talk?
    A - I don't like to be reminded...

  4. Eternally anonymous (April 05, 2012 7:27 PM)

    All I can say is that the field of neuroimaging will not be the same after it finally attracts the attention of John Ioannidis. (Google it yourself and despair.)

  5. Do you remember the Gauthier/Kanwisher debate? Well, there was a lot of that going on there (if you reread some of those papers). Not sure it was unethical, but it was certainly partisan.
    That said, this analysis seems rather naive to me. When you have a strong, replicable effect, it typically comes out regardless of what you do. If you have nothing, sure, you can make it come out significant or not by tweaking things.
    So, is Mr. Carp telling us he would not pick the method that "works" were his tenure on the line? Change the reward system, and people's behavior will change accordingly. If somebody's livelihood depends on a study turning out or not, they will make the study work one way or the other. No-brainer there.

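Regarding the point in comment 2 about testing multiple hypotheses until something is significant: below is a toy simulation (my own sketch, not from Simmons et al. or from the talk) of how running several independent tests on pure noise and reporting whichever crosses p < .05 inflates the false-positive rate. The number of simulated studies, tests, and subjects are arbitrary assumptions chosen for illustration.

```python
# Toy simulation: false-positive inflation from trying many tests on null data
# and reporting whichever comes out "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000   # simulated "studies"
n_tests = 10             # independent hypotheses tried per study
n_subjects = 20

false_positive_studies = 0
for _ in range(n_experiments):
    # Null data: no true effect anywhere.
    data = rng.normal(size=(n_tests, n_subjects))
    pvals = stats.ttest_1samp(data, popmean=0, axis=1).pvalue
    if (pvals < 0.05).any():        # "report the analysis that worked"
        false_positive_studies += 1

print("Per-test alpha: 0.05")
print(f"Chance of at least one 'significant' result: "
      f"{false_positive_studies / n_experiments:.2f}")  # about 1 - 0.95**10, i.e. ~0.40
```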