Wednesday, March 24, 2010

Voodoo and Type II: Debate between Piotr Winkielman and Matt Lieberman


"Voodoo correlations in social neuroscience" was the original title of a paper that first caused a stir in late December 2008, when a manuscript accepted by Perspectives on Psychological Science was made available on the authors' websites. Vul, Harris, Winkielman, and Pashler produced a "bombshell of a paper" that questioned the implausibly high correlations observed in some fMRI studies in the field of Social Neuroscience. Ed Vul et al. surveyed the authors of 54 papers to determine the analytic methods used. All but three of the authors responded to the survey, and 54% admitted to using faulty methods to obtain their results:
More than half acknowledged using a strategy that computes separate correlations for individual voxels, and reports means of just the subset of voxels exceeding chosen thresholds. We show how this non-independent analysis grossly inflates correlations, while yielding reassuring-looking scattergrams. This analysis technique was used to obtain the vast majority of the implausibly high correlations in our survey sample. In addition, we argue that other analysis problems likely created entirely spurious correlations in some cases.
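To see concretely why this "non-independent" two-step procedure inflates correlations, here is a minimal simulation of my own (not code from the paper; the subject count, voxel count, and selection threshold are arbitrary illustrative choices): generate pure-noise voxels, keep only those whose correlation with a behavioral score passes a threshold, and then correlate the mean signal of the survivors with the very same scores.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 16, 5000

# Behavioral scores and voxel signals are independent noise:
# the true brain-behavior correlation is exactly zero.
behavior = rng.standard_normal(n_subjects)
voxels = rng.standard_normal((n_voxels, n_subjects))

# Step 1: per-voxel correlation with behavior (the "whole-brain" sweep).
bz = (behavior - behavior.mean()) / behavior.std()
vz = (voxels - voxels.mean(axis=1, keepdims=True)) / voxels.std(axis=1, keepdims=True)
r_per_voxel = (vz @ bz) / n_subjects

# Step 2 (non-independent): keep only voxels passing a threshold,
# then correlate their *mean* signal with the same behavioral scores.
selected = voxels[r_per_voxel > 0.6]
mean_signal = selected.mean(axis=0)
r_reported = np.corrcoef(mean_signal, behavior)[0, 1]

print(f"voxels selected by chance: {selected.shape[0]}")
print(f"'reported' correlation from pure noise: {r_reported:.2f}")
```

Out of thousands of noise voxels, a few dozen clear the threshold by chance, and their averaged signal correlates strongly with behavior even though the true correlation is zero, because the selection and the reported statistic are computed on the same data.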
For background reading I suggest starting with Voodoo Correlations in Social Neuroscience. Given the paper's inflammatory title and its naming of names, the accused researchers did not take the criticism lying down (see Voodoo Schadenfreude).

Here we have a public debate, held at the 2009 meeting of the Society of Experimental Social Psychology, between Dr. Piotr Winkielman, one of the authors of the Voodoo paper (Vul et al., 2009, PDF), and Dr. Matthew Lieberman, one of the accused (rebuttal: Lieberman et al., 2009, PDF). Dr. Lieberman has made these videos and papers available on his website, and I thank him for drawing my attention to them.

The Voodoo Debate Continues...

Piotr Winkielman, opening remarks (21:12)

Matt Lieberman, opening remarks (19:03) [better view of slides]

Piotr Winkielman, rebuttal (10:22)

Matt Lieberman, rebuttal (9:40) [better view of slides]

Lieberman did strike back (Lieberman et al., 2009), attacking Vul et al. for cherry-picking their data and for inappropriate use of statistics:
However, they imply that post hoc reporting of correlations constitutes an invalid inferential procedure, when in fact it is a descriptive procedure that is entirely valid. In addition, the quantitative claims that give their arguments the appearance of statistical rigor are based on problematic assumptions. Thus, it is ironic that Vul et al.’s article—which critiques social neuroscience as having achieved popularity in prominent journals and the press due to shaky statistical reasoning—itself achieved popularity based on problematic claims about the process of statistical inference.

Additional Reading:

Voodoo correlations in social brain studies

Voodoo Gurus

"Voodoo Correlations" in fMRI - Whose voodoo?

The paper formerly known as "Voodoo Correlations in Social Neuroscience"


References

Lieberman, M., Berkman, E., & Wager, T. (2009). Correlations in Social Neuroscience Aren't Voodoo: Commentary on Vul et al. (2009). Perspectives on Psychological Science, 4(3), 299-307. DOI: 10.1111/j.1745-6924.2009.01128.x

Vul, E., Harris, C., Winkielman, P., & Pashler, H. (2009). Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition. Perspectives on Psychological Science, 4(3), 274-290.


Thursday, January 15, 2009

Voodoo Schadenfreude


Voodoo doll, by Sickboy

Most hip researchers in cognitive neuroscience and human brain imaging have already heard about the critical new journal article with the incendiary title: "Voodoo Correlations in Social Neuroscience" (Vul et al., in press - PDF). If you haven't, you can read a comprehensive summary here and a micro version here.

Avenging Voodoo Schadenfreude

Nature News ran a piece on the debate and the burgeoning backlash from an angry mob of researchers whose methods were derided as fatally flawed. Some of these authors (and perhaps some Nature editors) were miffed that bloggers wrote about the preprint when it was first made available to the public, as if that somehow violates the scientific method:
The swift rebuttal was prompted by scientists' alarm at the speed with which the accusations have spread through the community. The provocative title — 'Voodoo correlations in social neuroscience' — and iconoclastic tone have attracted coverage on many blogs, including that of Newsweek. Those attacked say they have not had the chance to argue their case in the normal academic channels.

"I first heard about this when I got a call from a journalist," comments neuroscientist Tania Singer of the University of Zurich, Switzerland, whose papers on empathy are listed as examples of bad analytical practice. "I was shocked — this is not the way that scientific discourse should take place." Singer says she asked for a discussion with the authors when she received the questionnaire, to clarify the type of information needed, but got no reply.
Based on the statements above, it would seem that Dr. Singer and her colleagues (Jabbi, Keysers, and Stephan) are not keeping up with the way that scientific discourse is evolving. [See Mind Hacks on this point as well.] Citing "in press" articles in the normal academic channels is a frequent event; why should bloggers, some of whom are read more widely than the authors' original papers, refrain from such a practice? Is it the "read more widely" part? To their credit, however, they commented in blogs and publicized the link to a preliminary version of their detailed reply... although calling it "summary information for the press" assumes that "the press" is extremely knowledgeable about neuroimaging methodology and statistical analysis.

To learn more about the evolution of scientific discourse, let me briefly introduce you to the world of social media (e.g., FriendFeed, Facebook, and even Twitter). You can join the discussion at FriendFeed's Science 2.0 Room, which is "For people interested in Science 2.0 and Open Science, especially the use of online tools to do science in new ways." Although one needs a Facebook account to view these, Facebook groups include Neuroscience ROCKS (4,006 members), Neuroscience and Brain Studies (3,194 members), and Cognitive Neuroscience Society (2,147 members). Some social neuroscientists are, well, social enough to get it, because Columbia University Social Cognitive Affective Neuroscience Lab has a posse (79 fans, albeit inactive ones). And believe it or not, NIH now has Twitter feeds for press releases and funding announcements. As for individuals who are powerhouse resources on the future of scientific communication, I would recommend reading BoraZ (blog, Twitter), who is organizing the Jan. 16-18 ScienceOnline’09 conference, and Björn Brembs on scientific publishing and misuse of the Impact Factor.

All is not puppies and flowers in the world of science social media, however. Proponents rarely acknowledge that many companies and institutions block access to these sites, so at present their usefulness is limited for many in the scientific community. A more obvious issue is that these sites can turn into an enormous time sink.

Now back to Nature News and the voodoo backlash. In an ironic twist, one of the 'red listed' papers (Singer et al., 2006), published in Nature, was publicized as a study on Schadenfreude. Here's the Editor's Summary [which was covered by The Neurocritic three years ago]:
I feel your pain

Humans have the capacity to empathize with the pain of others, but we don't empathize in all circumstances. An experiment on human volunteers playing an economic game looked at the conditional nature of our sympathy, and the results show that fairness of social interactions is key to the empathic neural response. Both men and women empathized with the pain of cooperative people. But if people are selfish, empathic responses were absent, at least in men. And it seems that physical harm might even be considered a good outcome — perhaps the first neuroscientific evidence for schadenfreude.
Nature and Science have a long history of issuing overblown press releases that extrapolate the findings of a single, quite flawed [if you side with Vul et al.] neuroimaging paper to yield the revelation of deep truths about human social interactions (among other things). The Nature News piece, Brain imaging studies under fire (Abbott, 2009), continues:
The article is scheduled for publication in September, alongside one or more replies. But the accused scientists are concerned that the impression now being established through media reports will be hard to shake after the nine-month delay. "We are not worried about our close colleagues, who will understand the arguments. We are worried that the whole enterprise of social neuroscience falls into disrepute," says neuroscientist Chris Frith of University College London, whose Nature paper [Singer et al., 2006] on response to perceived fairness was called into question.
So media reports heavily promoted the field, and media reports will unduly tarnish the field.1

New Scientist provides a clear instance of this, in what is surely a textbook exemplar of a pot-kettle moment.
Doubts raised over brain scan findings

14 January by Jim Giles

SOME of the hottest results in the nascent field of social neuroscience, in which emotions and behavioural traits are linked to activity in a particular region of the brain, may be inflated and in some cases entirely spurious.
But one doesn't have to look very far to find New Scientist headlines like these (I just searched the archives of this blog):
Watching the brain 'switch off' self-awareness

Do games prime brain for violence?

Starving is like ecstasy use for anorexia sufferers

Mirror neurons control erection response to porn

Source of ‘optimism’ found in the brain
So the NS editorial below comes across as a wee bit hypocritical, even though it eventually acknowledges their own role in promoting "sexy-sounding" brain scan results.
Editorial: What were the neuroscientists thinking?

14 January 2009

IT IS two centuries since the birth of Charles Darwin, but even now his advice can be spot on. The great man attempted a little neuroscience in The Expression of the Emotions in Man and Animals, published in 1872, in which he discussed the link between facial expressions and the brain. "Our present subject is very obscure," Darwin warned in his book, "and it is always advisable to perceive clearly our ignorance."

Modern-day neuroscience might benefit from adopting a similar stance. The field has produced some wonderful science, including endless technicolor images of the brain at work and headline-grabbing papers about the areas that "light up" when registering emotions. Researchers charted those sad spots that winked on in women mourning the end of a relationship, the areas that got fired up when thinking about infidelity, or those that surged in arachnophobes when they thought they were about to see a spider. The subjective subject of feelings seemed at last to be becoming objective.

Now it seems that a good chunk of the papers in this field contain exaggerated claims, according to an analysis which suggests that "voodoo correlations" often inflate the link between brain areas and particular behaviours.

Some of the resulting headlines appeared in New Scientist, so we have to eat a little humble pie and resolve that next time a sexy-sounding brain scan result appears we will strive to apply a little more scepticism to our coverage.
Um, no joke guys.

On the other hand, Sharon Begley at Newsweek is one science writer who hasn't been entirely convinced by the colorful brain images. On March 10, 2008, she wrote:

Brain-imaging studies have proliferated so mindlessly (no pun intended) that neuroscientists should have to wear a badge pleading, “stop me before I scan again.” I mean, does it really add to the sum total of human knowledge to learn that the brain’s emotion regions become active when people listen to candidates for president? Or that the reward circuitry in the brains of drug addicts become active when they see drug paraphernalia?

Therefore, her recent commentary on the brouhaha does not come across as an opinion that was invented yesterday:
The 'Voodoo' Science of Brain Imaging

If you are a fan of science news, then odds are you are also intrigued by brain imaging, the technique that produces those colorful pictures of brains “lit up” with activity, showing which regions are behind which behaviors, thoughts and emotions. So maybe you remember these recent hits... [gives many examples here] . . . the list goes on and on and on. And now a bombshell has fallen on dozens of such studies: according to a team of well-respected scientists, they amount to little more than voodoo science.

The neuroscience blogosphere is crackling with—so far—glee over the upcoming paper, which rips apart an entire field: the use of brain imaging in social neuroscience...

Before concluding, I will state that I am not a complete neuroimaging nihilist. For examples of this view, see Coltheart, 2006 and especially van Orden and Paap, 1997 (as quoted by Coltheart):
What has functional neuroimaging told us about the mind so far? Nothing, and it never will: the nature of cognition is such that this technique in principle cannot provide evidence about the nature of cognition.
So no, I am not a Jerry Fodor Functionalist. I do believe that learning about human brain function is essential to learning about "the mind," that the latter can be reduced to the former, that fMRI can have something useful to say, and (more broadly, in case any anti-psychiatry types are listening) that psychiatric disorders are indeed caused by faulty brain function. But there's still a lot about fMRI as a technique that we don't really know. The best-practice statistical procedures for analyzing functional images are obviously a contentious issue; there is no consensus at this point. Our knowledge of what the BOLD signal is measuring, exactly, is not very clear either [see the recent announcement in J. Neurosci. that "BOLD Signals Do Not Always Reflect Neural Activity."] The critics among us2 are not trying to trash the entire field of social neuroscience (or neuroimaging in general). Some of us are taking concrete steps to open a dialogue and improve its methodology, while others are trying to rein in runaway interpretations.


ADDENDUM: via Pieces of Me, I've just discovered the link to PsyBlog's detailed discussion of the Coltheart paper: Can Cognitive Neuroscience Tell Us Anything About the Mind?

Footnote

1 It isn't even necessary to quote the appropriate metaphorical expression here.

2 By "us" I mean scientists: people who are students and post-docs and colleagues of esteemed investigators like Dr. Frith.

References

Abbott A (2009). News: Brain imaging studies under fire. Social neuroscientists criticized for exaggerating links between brain activity and emotions. Nature 457:245.

Jabbi M, Keysers C, Singer T, Stephan KE. (in preparation). Rebuttal of "Voodoo Correlations in Social Neuroscience" by Vul et al. – summary information for the press. PDF

Singer T, Seymour B, O'doherty JP, Stephan KE, Dolan RJ, Frith CD. (2006) Empathic neural responses are modulated by the perceived fairness of others. Nature 439:466-9.

Vul E, Harris C, Winkielman P, Pashler H (2009). Voodoo Correlations in Social Neuroscience. Perspectives on Psychological Science, in press. PDF


Tuesday, January 27, 2009

Voodoo Gurus


Marie Laveau's House of Voodoo [by OZinOH]

Does anyone else miss having the Society for Neuroscience conference in New Orleans as much as I do? The 2003 meeting was great fun, but there seems to be no plan to return any time soon. The 2009 SFN meeting was supposed to be in New Orleans, but it will be held in Chicago instead. Other large conferences have returned to the Ernest N. Morial Convention Center, most notably the American Heart Association in November 2008, with 30,000 attendees.

On another nostalgic note, this blog turns 3 today. To mark the occasion, we'll take a peek back into The Neurocritic's archives, because what's old is new again.

New Voodoo Correlations: Now Taking Nominations!

By now, most neuroimagers and cognitive neuroscientists have heard about the controversial (some would say inflammatory) new paper by Ed Vul and colleagues on Voodoo Correlations in Social Neuroscience (PDF), summarized in this post.1 In the article, Vul et al. claimed that over half of the fMRI studies that were surveyed used faulty statistical techniques to analyze their data:
...using a strategy that computes separate correlations for individual voxels, and reports means of just the subset of voxels exceeding chosen thresholds. We show how this non-independent analysis grossly inflates correlations, while yielding reassuring-looking scattergrams. This analysis technique was used to obtain the vast majority of the implausibly high correlations in our survey sample.
Needless to say, authors of the criticized papers were not pleased, and some have posted rebuttals (Jabbi et al. in preparation, PDF). Vul and colleagues responded to that rebuttal, but a new invited reply by Lieberman et al. (submitted, PDF) has just popped up. Here are some highlights from the abstract:
...Vul et al. incorrectly claim that whole-brain regression analyses use an invalid and “non-independent” two-step inferential procedure. We explain how whole-brain regressions are a valid single-step method of identifying brain regions that have reliable correlations with individual difference measures. ... Finally, it is troubling that almost 25% of the “non-independent” correlations in the papers reviewed by Vul et al. were omitted from their own meta-analysis without explanation.
An independent observer (Dr Justin Marley at The Amazing World of Psychiatry) made a point related to the latter one in his critique of Vul et al.:

1. The methodology is opaque - in particular the method of identifying relevant papers. The authors have criticised a number of the imaging studies similarly.

2. In my opinion there is possibly a selection bias in this paper - a small number of all possible papers are selected but due to the opacity of the methodology section we are unable to ascertain the nature of a possible selection bias. The authors criticise other researchers for identifying voxel activity based on correlation with the behaviour/phenomenological experience in question i.e. selection bias.

3. If there is a selection bias then the authors would have selected those papers which support their argument - thus generating a result similar to the ‘non-independent error’. Furthermore they have produced ‘visually appealing’ graphs for their data which ‘provide reassurance’ to the ‘viewer that s/he is looking at a result that is solid’.

Vul et al. are fully capable of responding to these objections, and I'm sure we'll see a rebuttal from them shortly. What I would like to do here is to mention 6 previous posts from The Neurocritic's archives (which all happen to cover the field of social neuroscience):

Mental as Anything

"The Disturbing World of Implicit Bias..."

The Trust Game

Mentalizing Mentalizing

Borderline … feels like I'm goin' to lose my mind

Who Can You Trust?

Although these posts don't engage in a rigorous deconstruction of analytic techniques à la Vul et al., they do ask some questions about the methods and about how the results are framed (i.e., over-interpreted). But first, let's reiterate that
The critics among us are not trying to trash the entire field of social neuroscience (or neuroimaging in general). Some of us are taking concrete steps to open a dialogue and improve its methodology, while others are trying to rein in runaway interpretations.
The rebuttals to Vul et al. emphasize that the latter's analytic objections are by no means unique to social neuroscience. Vul et al. acknowledged this, albeit not in a prominent way. The criticized authors are also peeved that "the media" (mostly bloggers) have contributed to a "sensationalized" atmosphere before the paper has been published. However, as previously noted in Mind Hacks,
The paper was accepted by a peer-reviewed journal before it was released to the public. The idea that something actually has to appear in print before anyone is allowed to discuss it seems to be a little outdated (in fact, was this ever the case?).
Why do I have a problem with some papers in social neuroscience? Let's take the study by Mitchell et al. (2006), which qualified for its own special category in the Vul et al. critique:
* study 26 carried out a slightly different, non-independent analysis: instead of explicitly selecting for a correlation between IAT and activation, they split the data into two groups, those with high IAT scores and those with low IAT scores, they then found voxels that showed a main effect between these two groups, and then computed a correlation within those voxels. This procedure is also non-independent, and will inflate correlations.)
The participants in that study made judgments about hypothetical people who were either similar or dissimilar to themselves (on liberal vs. conservative sociopolitical views). Two regions of medial prefrontal cortex were identified: ventral mPFC was the "Like Me" area and dorsal mPFC was the "Not Like Me" area. Even if we believe those inferences about mentalizing, how are we to interpret these graphs?

Does the first bar graph (left) mean that liberals are a little less hostile to conservatives than vice versa? Does the other bar graph (right) mean that the “Not Like Me” area in liberals is equally activated by “self” and “conservative other”?? What DOES it all mean?
After the scanning session was over, the participants...
...completed a "liberal-conservative" IAT [Implicit Association Test]2 that used photos of the hypothetical persons presented for "mentalizing" judgments in the scanning session.
The authors used the IAT to retroactively assign subjects to "like liberal" and "not like liberal" groups. As the graph illustrates, only 3 subjects (out of 15 total) actually had RT effects indicating they might have a closer affinity to the conservative "other" (if you believe the IAT).
Very well then. The researchers should have recruited actual conservative Christians for a valid sample of conservative students.

But the Voodoo Guru Award goes to...



...King-Casas et al. (Science 2008) for The Rupture and Repair of Cooperation in Borderline Personality Disorder!!

This paper examined how well individuals with borderline personality disorder (BPD) trusted others in an economic exchange game (called, conveniently enough, the Trust Game). In this game, one player (the Investor) gives a sum of money to the other player (the Trustee). The investment triples in transit, and the Trustee decides how much to give back to the Investor. Relative to the control group, the BPD group was more likely to make a small repayment after receiving a small investment, reflecting a failure to "coax": a generous repayment after a stingy investment is what induces Investors to restore their trust in a partner.
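The payoff structure of a single round can be sketched in a few lines (a toy illustration, not the authors' task code; the endowment size and the repayment fractions are assumptions for the example):

```python
def trust_round(investment: float, repay_fraction: float, endowment: float = 20.0):
    """One round of the Trust Game: the Investor sends some of the
    endowment, the amount triples in transit, and the Trustee returns
    a fraction of the tripled pot. Returns (investor, trustee) payoffs."""
    assert 0 <= investment <= endowment and 0 <= repay_fraction <= 1
    pot = 3 * investment
    repayment = repay_fraction * pot
    investor_payoff = endowment - investment + repayment
    trustee_payoff = pot - repayment
    return investor_payoff, trustee_payoff

# A "coaxing" Trustee answers a small (distrustful) investment with a
# generous repayment, signalling that cooperation is safe:
print(trust_round(investment=5, repay_fraction=0.8))  # → (27.0, 3.0)

# A Trustee who repays little after a small investment (the pattern the
# BPD group showed more often) leaves the Investor worse off:
print(trust_round(investment=5, repay_fraction=0.1))  # → (16.5, 13.5)
```

The point of the tripling is that mutual trust grows the total pie, so a Trustee who sacrifices short-term payoff to coax a wary Investor can earn more over repeated rounds.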

Borderline … feels like I'm goin' to lose my mind
So why do I feel like I'm goin' to lose my mind?3 As suggested by a colleague, the present paper 1) makes liberal use of reverse inference and 2) reeks of fishing.4 In my view, the real trouble arises when the authors try to explain what bits of the brain might be implicated in the lack of trust shown by players with BPD. It's the insula! [and only the insula]. Why is that problematic? We shall return to that question in a moment.

In the study of Sanfey et al., unfair offers were associated with greater activity in bilateral anterior insula, dorsolateral prefrontal cortex, and anterior cingulate cortex, with the degree of insular activity related to the stinginess of the offer. A similar relationship was observed here in the controls, but not in the BPD patients. Taking a step back for a moment, we see differences between control and BPD participants (for the contrast low vs. high investment) in quite a number of places...

However, the within-group analysis in controls yielded a "small investment" effect only in bilateral anterior insula (12 voxels and 15 voxels, respectively, at p<.10). The same analysis in the BPD group yielded absolutely no significant differences anywhere in the brain! BPD participants react to small stingy investments with differential behavior (by returning a very low percentage of the investment), yet there is no area in the brain telling them to do this. Perhaps something is going on in the delay period between the investment and repayment phases, but if so we don't find out. The authors' interpretation of the null brain effect is that BPD subjects have a social perception problem (as Montague explains here), and do not respond correctly to social norm violations.
Who Can You Trust?
[Thus], the authors bypassed more general analyses comparing BPD and control brains during the point of investment and the point of repayment. Instead, the major neuroimaging result contrasted the receipt of low investment offers vs. high investment offers, as illustrated below. Control brains showed a nearly perfect linear correlation [r=-.97] between $ offer and activity in the anterior insula (expressed here as a negative correlation, because low $ offers correlated with high insula activity). Such a relationship was not observed in BPD brains...

Fig. 3 (King-Casas et al., 2008). Response of 38 healthy trustee brains and 55 BPD trustee brains to level of cooperation.

In yet another case of reverse inference, the authors concluded that:
Neurally, activity in the anterior insula, a region known to respond to norm violations across affective, interoceptive, economic, and social dimensions, strongly differentiated healthy participants from individuals with BPD.
However, many other imaging studies have shown that this exact same region of the insula is activated during tasks that assess speech, language, explicit memory, working memory, reasoning, pain, and listening to emotional music (see this figure).

Voodoo or no voodoo?

What do you think?


Footnotes

1 You can also read a quick overview here and more in-depth commentary here.

2 We'll gloss over the objections of some commenters who think the IAT itself is voodoo.

3 Other than the fact that I am not knowledgeable in behavioral game theory (see Camerer et al., 2003 for that, PDF).

4 They also reported significance (corrected using FDR procedures) at the p<.1 level. Why? This paper on Detecting signals in FMRI data using powerful FDR procedures (Pavlicova et al., 2008) recommends the standard α level (= .01 or .05).
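For readers unfamiliar with FDR correction, the Benjamini-Hochberg step-up procedure can be sketched as follows (the p-values here are made up for illustration); it shows how loosening the FDR level from .05 to .10 lets noticeably more tests through:

```python
import numpy as np

def benjamini_hochberg(pvals, q):
    """Benjamini-Hochberg step-up procedure: return a boolean mask of
    the hypotheses rejected at false-discovery-rate level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m)*q, then reject the k
    # smallest p-values.
    below = ranked <= (np.arange(1, m + 1) / m) * q
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()
        mask[order[:k + 1]] = True
    return mask

pvals = [0.001, 0.008, 0.020, 0.041, 0.060, 0.085, 0.30, 0.55]
print(benjamini_hochberg(pvals, q=0.05).sum(), "rejections at q = .05")
print(benjamini_hochberg(pvals, q=0.10).sum(), "rejections at q = .10")
```

With these example p-values, q = .05 rejects 2 tests while q = .10 rejects 5, which is why the choice of level matters so much when whole-brain voxel counts are large.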

References

Jabbi M, Keysers C, Singer T, Stephan KE. (in preparation). Rebuttal of "Voodoo Correlations in Social Neuroscience" by Vul et al. – summary information for the press (PDF).

King-Casas B, Sharp C, Lomax-Bream L, Lohrenz T, Fonagy P, Montague PR. (2008). The Rupture and Repair of Cooperation in Borderline Personality Disorder. Science 321:806-810.

Lieberman MD, Berkman ET, Wager TD. (submitted). Correlations in social neuroscience aren't voodoo: A reply to Vul et al. Invited reply submitted to Perspectives on Psychological Science (PDF).

Mitchell, J.P., Macrae, C.N., and Banaji, M.R. (2006). Dissociable Medial Prefrontal Contributions to Judgments of Similar and Dissimilar Others. Neuron 50: 655–663.

Vul E, Harris C, Winkielman P, Pashler H (2009). Voodoo Correlations in Social Neuroscience. Perspectives on Psychological Science, in press (PDF).


Tuesday, January 13, 2009

Voodoo Counterpoint



In the spirit of American political debate shows such as Crossfire, The McLaughlin Group, Hannity & Colmes, and the classic Point/Counterpoint (both the 60 Minutes and SNL versions), The Neurocritic is pleased to present an excerpt from a rebuttal to the lively and controversial paper by Vul, Harris, Winkielman, and Pashler (PDF).

Two "anonymous commenters" tipped me off to the preliminary version of a detailed reply by some of the authors on the Vul et al. hit list. The entire document is available for download as a PDF. The abstract, main bullet points, and conclusions are reproduced below.

Rebuttal of "Voodoo Correlations in Social Neuroscience" by Vul et al. – summary information for the press

Mbemba Jabbi1, Christian Keysers2, Tania Singer3, Klaas Enno Stephan3
(authors listed in alphabetical order)

1 National Institute of Mental Health, Bethesda, Maryland, USA.

2 University Medical Center Groningen, Department of Neuroscience, University of Groningen, The Netherlands. www.bcn-nic.nl/socialbrain.html

3 Laboratory for Social and Neural Systems Research, University of Zurich, Switzerland. http://www.socialbehavior.uzh.ch/index.html

The paper by Vul et al., entitled "Voodoo correlations in social neuroscience" and accepted for publication by Perspectives on Psychological Science, claims that "a disturbingly large, and quite prominent, segment of social neuroscience research is using seriously defective research methods and producing a profusion of numbers that should not be believed." In all brevity, we here summarise conceptual shortcomings and methodological errors of this paper and explain why their criticisms are invalid. A detailed reply will be submitted to a peer reviewed scientific journal shortly.

1. The authors misunderstand the critical role of multiple comparison corrections and conflate issues pertaining to null hypothesis testing and effect size estimates, respectively.

2. The authors make strong claims on the basis of a questionable upper bound argument.

3. The authors use misleading simulations to support their claims.

4. The authors inappropriately dismiss the existence of non-significant correlations.

5. The authors' understanding of the rationale behind the use and interpretation of correlations in social neuroscience is incomplete.

6. The authors ignore that the same brain-behaviour correlations have been replicated by several independent studies and that major results in social neuroscience are not based on correlations at all.

7. The authors used an ambiguous and incomplete questionnaire.

8. The authors make flawed suggestions for data analysis.

. . .

Conclusions

In this summary, we have provided a very brief summary that exposes some of the flaws that undermine the criticisms by Vul et al. We have pointed out that brain-behaviour correlations in social neuroscience are valid, provided that one adheres to good statistical practice. It has also been emphasized that many analyses and findings in social neuroscience do not rest on brain-behaviour correlations and have been replicated several times by independent studies conducted by different laboratories. A full analysis of the Vul et al. paper and a detailed reply will be submitted to a peer-reviewed scientific journal shortly.
_____________________________________________________

A rebuttal to the rebuttal, along with commentary by The Neurocritic, all to come in the next exciting episode!

But for now, you can watch Colbert & Colmes discuss Roland Burris, who was appointed to replace Barack Obama as junior senator of Illinois.


Wednesday, April 08, 2009

The paper formerly known as "Voodoo Correlations in Social Neuroscience"


Voodoo no more!

The paper everyone loves (or loves to hate) has a new name.1 Through a number of channels [The Chronicle of Higher Education via @vaughanbell, Ed Vul's website, and Neuroskeptic], The Neurocritic has learned that the "Voodoo Correlations" have been downgraded to mere "Puzzlingly High Correlations." The field of social neuroscience has been spared as well, because the full title of the paper is now "Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition" (PDF).

By now, most neuroimagers and cognitive neuroscientists have heard about that controversial (some would say inflammatory) paper by Ed Vul and colleagues, summarized in this post.2 In the article, Vul et al. claimed that over half of the fMRI studies that were surveyed used faulty statistical techniques to analyze their data:
...using a strategy that computes separate correlations for individual voxels, and reports means of just the subset of voxels exceeding chosen thresholds. We show how this non-independent analysis grossly inflates correlations, while yielding reassuring-looking scattergrams. This analysis technique was used to obtain the vast majority of the implausibly high correlations in our survey sample.
Needless to say, authors of the criticized papers were not pleased. Two rebuttals were released online shortly thereafter: one by Jabbi et al. (PDF) -- here's the response to that rebuttal -- and an invited reply by Lieberman et al. (PDF).

That was back in January, after the manuscript had been accepted for publication by Perspectives on Psychological Science in late December 2008. Now [finally], the paper has been officially published in the May 2009 issue of the journal, with an introduction (PDF) by Ed Diener, the editor. Also included are six Commentaries by assorted authors and a Reply to the Commentaries by Vul et al. (PDF).

I haven't had time to read all the commentaries and rebuttals yet, but the Editor's Introduction is worth a quick mention for the issues it raises about peer review and publication in these modern times.
PREPUBLICATION DISSEMINATION

As soon as I accepted the Vul et al. article, I heard from researchers about it. People around the globe saw the article on the Internet, and replies soon appeared as well. Although my plan was to publish the article with commentary, the appearance of the article on the Internet meant that researchers read the article without the accompanying commentaries and replies that I had planned to publish with it.

In some fields such as economics, it is standard practice to widely disseminate articles before they are published, whereas in much of psychology this has been discouraged. An argument in favor of dissemination is that it speeds scientific communication in a fast-paced world where journal publication is often woefully slow. An argument against dissemination of articles before publication is that readers do not have the opportunity to simultaneously see commentary and replies. ... In the Internet age, the issue of prepublication distribution becomes all the more important because an article can reach thousands of readers in a few hours. Given the ability of the Internet to communicate so broadly and quickly, we need greater discussion of this issue.
Bloggers have discussed this specific issue months ago. For example, as noted in Mind Hacks,
The paper was accepted by a peer-reviewed journal before it was released to the public. The idea that something actually has to appear in print before anyone is allowed to discuss it seems to be a little outdated (in fact, was this ever the case?).
And The Neurocritic opined that...
[The aggrieved authors] are not keeping up with the way that scientific discourse is evolving. Citing "in press" articles in the normal academic channels is a frequent event; why should bloggers, some of whom are read more widely than the authors' original papers, refrain from such a practice? Is it the "read more widely" part?
...and in The Voodoo of Peer Review I asked:
Are blogs good or bad for the enterprise of scientific peer review? At present, the system relies on anonymous referees to provide "unbiased" opinions of a paper's (or grant's) merits. For today, the discussion will focus on peer review of papers in scientific journals...

[An] article [in Seed Magazine] begins:
"Few endeavors have been affected more by the tools and evolution of the internet than science publishing. Thousands of journals are available online, and an increasing number of science bloggers are acting as translators, often using lay language to convey complex findings previously read only by fellow experts within a discipline. Now, in the wake of a new paper challenging the methodology of a young field, there is a case study for how the internet is changing the way science itself is conducted."
Really? Maybe that's true for Biological and Social Sciences, but certainly not for Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance and Statistics (see arXiv.org)...
Diener then raises the point that online bloggers and commenters may be discussing various versions of the manuscript:
Another problem that has arisen in terms of Internet “publication” of the article and the Internet replies is that different individuals will have read different versions of the article. A single reader is unlikely to read more than one version of the article and will therefore often not see later corrections and changes. Furthermore, the commentaries are to some extent replies to different versions of the article and therefore might not be entirely on-target for the final version. This makes it difficult to fully understand the arguments because comments and replies might not be to the most current versions of articles, and it is impossible to fully correct this because the back-and-forth of revisions could continue indefinitely.
So there's never a final version of the article because revisions continue indefinitely?? Or are the accepted and final versions of the manuscript so radically different [why, I might ask] that a discussion of the initially accepted version is misleading? Or instead, is it the online commenters who are "revising" the article ad infinitum? Will Diener's editorial be clarified in a future edition, thus rendering moot my confusion in this particular post?

At any rate, Diener also discusses ethical issues surrounding the questionnaire that Vul et al. distributed to the authors. Some believed they were unwitting participants in Human Subjects research and did not give their informed consent (Diener disagreed). Not surprisingly, the "article tone" was another source of contention, and here Diener agreed to change the original "Voodoo" title. Finally, some of the aggrieved authors disputed the accuracy of the entire paper, suggesting that some (if not all) of their research was incorrectly classified. But in the end, the editor defers to the readers, who will judge the article and comments and form their own opinions.
I believe that the debate can itself stimulate useful discussions about scientific practices and communication. Further discussion of the issues should now take place in journals that are focused on imaging and neuroscience, so that the readers there can judge and benefit from the ensuing discussions.
I believe that further discussion of the issues can also take place on blogs that are focused on imaging and neuroscience. So feel free to discuss at length. Leave your questions and observations in the comments section of this post!

Footnotes

1 See The Voodoo of Peer Review for a preview of this issue.

2 You can also read a quick overview at Scan Scandal Hits Social Neuroscience, and more in-depth commentary in the post Voodoo Schadenfreude. And a comprehensive list of links about the paper is located here.


Ed Diener (2009). Editor's Introduction to Vul et al. (2009) and Comments. Perspectives on Psychological Science, 4 (3).

Complete List of References

(from PERSPECTIVES ON PSYCHOLOGICAL SCIENCE, Vol. 4, Issue No. 3 · May 2009)

Editor's Introduction to Vul et al. (2009) and Comments
Ed Diener

Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition
Edward Vul, Christine Harris, Piotr Winkielman, and Harold Pashler

Commentary on Vul et al.'s (2009) "Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition"
Thomas E. Nichols and Jean-Baptiste Poline

Big Correlations in Little Studies: Inflated fMRI Correlations Reflect Low Statistical Power--Commentary on Vul et al. (2009)
Tal Yarkoni

Correlations in Social Neuroscience Aren't Voodoo: Commentary on Vul et al. (2009)
Matthew D. Lieberman, Elliot T. Berkman, and Tor D. Wager

Discussion of "Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition" by Vul et al. (2009)
Nicole A. Lazar

Correlations and Multiple Comparisons in Functional Imaging: A Statistical Perspective (Commentary on Vul et al., 2009)
Martin A. Lindquist and Andrew Gelman

Understanding the Mind by Measuring the Brain: Lessons From Measuring Behavior (Commentary on Vul et al., 2009)
Lisa Feldman Barrett

Reply to Comments on "Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition"
Edward Vul, Christine Harris, Piotr Winkielman, and Harold Pashler



Monday, January 05, 2009

Voodoo Correlations in Social Neuroscience



The end of 2008 brought us the tabloid headline, Scan Scandal Hits Social Neuroscience. As initially reported by Mind Hacks, a new "bombshell of a paper" (Vul et al., 2009) questioned the implausibly high correlations observed in some fMRI studies in Social Neuroscience. A new look at the analytic methods revealed that over half of the sampled papers used faulty techniques to obtain their results.

Edward Vul, the first author, deserves a tremendous amount of credit (and a round of applause) for writing and publishing such a critical paper under his own name [unlike all those cowardly pseudonymous bloggers who shall go unnamed here]. He's a graduate student in Nancy Kanwisher's Lab at MIT. Dr. Kanwisher1 is best known for her work on the fusiform face area.

Credit (of course) is also due to the other authors of the paper (Christine Harris, Piotr Winkielman, and Harold Pashler), who are at the University of California, San Diego. So without further ado, let us begin.

A Puzzle: Remarkably High Correlations in Social Neuroscience

Vul et al. start with the observation that the new field of Social Neuroscience (or Social Cognitive Neuroscience) has garnered a great deal of attention and funding in its brief existence. Many high-profile neuroimaging articles have been published in Science, Nature, and Neuron, and have received widespread coverage in the popular press. However, all may not be rosy in paradise:2
Eisenberger, Lieberman, and Williams (2003), writing in Science, described a game they created to expose individuals to social rejection in the laboratory. The authors measured the brain activity in 13 individuals at the same time as the actual rejection took place, and later obtained a self-report measure of how much distress the subject had experienced. Distress was correlated at r=.88 with activity in the anterior cingulate cortex (ACC).

In another Science paper, Singer et al. (2004) found that the magnitude of differential activation within the ACC and left insula induced by an empathy-related manipulation was correlated between .52 and .72 with two scales of emotional empathy (the Empathic Concern Scale of Davis, and the Balanced Emotional Empathy Scale of Mehrabian).
Why is a correlation of r=.88 with 13 subjects considered "remarkably high"? For starters, it exceeds the maximum correlation attainable given the reliability of the hemodynamic and behavioral (social, emotional, personality) measurements:
The problem is this: It is a statistical fact... that the strength of the correlation observed between measures A and B reflects not only the strength of the relationship between the traits underlying A and B, but also the reliability of the measures of A and B.
Evidence from the existing literature suggests the test-retest reliability of personality rating scales is .7-.8 at best, and no higher than .7 for the BOLD (Blood-Oxygen-Level Dependent) signal. Even if the underlying traits were [impossibly] correlated at a perfect 1.0, the highest observable correlation would be sqrt(.8 * .7), or about .74.
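That attenuation ceiling is easy to check directly. Here is a minimal Python sketch (the function name is mine, not from the paper) computing the classical upper bound sqrt(r_AA * r_BB) on an observed correlation between two noisy measures:

```python
import math

def max_observed_correlation(reliability_a, reliability_b):
    """Classical attenuation bound: an observed correlation between two
    measures cannot exceed sqrt(r_AA * r_BB), even when the underlying
    traits are perfectly correlated."""
    return math.sqrt(reliability_a * reliability_b)

# Reliabilities cited above: ~.8 for personality scales, ~.7 for BOLD.
print(max_observed_correlation(0.8, 0.7))
```

With the reliabilities quoted above, the bound is sqrt(0.56) ≈ .74, so an r of .88 is mathematically out of reach for these measures.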

This observation prompted the authors to conduct a meta-analysis of the literature. They identified 54 papers that met their criteria for fMRI studies reporting correlations between the BOLD response in a particular brain region and some social/emotional/personality measure. In most cases, the Methods sections did not provide enough detail about the statistical procedures used to obtain these correlations. Therefore, a questionnaire was devised and sent to the corresponding authors of all 54 papers:
APPENDIX 1: fMRI Survey Question Text

Would you please be so kind as to answer a few very quick questions about the analysis that produced, i.e., the correlations on page XX. We expect this will just take you a minute or two at most.

To make this as quick as possible, we have framed these as multiple choice questions and listed the more common analysis procedures as options, but if you did something different, we'd be obliged if you would describe what you actually did.

The data plotted reflect the percent signal change or difference in parameter estimates (according to some contrast) of...

1. ...the average of a number of voxels.
2. ...one peak voxel that was most significant according to some functional measure.
3. ...something else?

etc.....

Thank you very much for giving us this information so that we can describe your study accurately in our review.
They received 51 replies. Did these authors suspect the final product could put some of their publications in such a negative light?

SpongeBob: What if Squidward’s right? What if the award is a phony? Does this mean my whole body of work is meaningless?

After providing a nice overview of fMRI analysis procedures (beginning on page 6 of the preprint), Vul et al. present the results of the survey, and then explain the problems associated with the use of non-independent analysis methods.
...23 [papers] reported a correlation between behavior and one peak voxel; 29 reported the mean of a number of voxels. ... Of the 45 studies that used functional constraints to choose voxels (either for averaging, or for finding the ‘peak’ voxel), 10 said they used functional measures defined within a given subject, 28 used the across-subject correlation to find voxels, and 7 did something else. All of the studies using functional constraints used the same data to select voxels, and then to measure the correlation. Notably, 54% of the surveyed studies selected voxels based on a correlation with the behavioral individual-differences measure, and then used those same data to compute a correlation within that subset of voxels.
Therefore, for these 28 papers, voxels were selected because they correlated highly with the behavioral measure of interest. Using simulations, Vul et al. demonstrate that this glaring "non-independence error" can produce significant correlations out of noise!
This analysis distorts the results by selecting noise exhibiting the effect being searched for, and any measures obtained from such a non-independent analysis are biased and untrustworthy (for a formal discussion see Vul & Kanwisher, in press, PDF).
And the problem is magnified in correlations that used activity in one peak voxel (out of a grand total of between 40,000 and 500,000 voxels in the entire brain) instead of a cluster of voxels that passed a statistical threshold. Papers that used non-independent analyses were much more likely to report implausibly high correlations, as illustrated in the figure below.


Figure 5 (Vul et al., 2009). The histogram of the correlations values from the studies we surveyed, color-coded by whether or not the article used non-independent analyses. Correlations coded in green correspond to those that were achieved with independent analyses, avoiding the bias described in this paper. However, those in red correspond to the 54% of articles surveyed that reported conducting non-independent analyses – these correlation values are certain to be inflated. Entries in orange arise from papers whose authors chose not to respond to our survey.
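The inflation shown in Figure 5 is easy to reproduce. Below is a toy Python simulation (not the authors' actual code; the function names, parameter values, and random seed are mine) that commits the non-independence error on pure noise: every "voxel" is random, yet selecting the voxels that happen to correlate with behavior and then averaging those same voxels yields a large correlation.

```python
import random
import statistics

def pearson(x, y):
    """Plain Pearson correlation, no external dependencies."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def non_independent_r(n_subjects=13, n_voxels=5000, threshold=0.5, seed=1):
    """Commit the non-independence error on pure noise:
    1. draw random 'behavior' scores and random 'voxel' signals;
    2. keep only voxels whose across-subject correlation with behavior
       exceeds the threshold (the selection step);
    3. average the selected voxels and correlate that average with the
       SAME behavior scores (the circular measurement step)."""
    rng = random.Random(seed)
    behavior = [rng.gauss(0, 1) for _ in range(n_subjects)]
    voxels = [[rng.gauss(0, 1) for _ in range(n_subjects)]
              for _ in range(n_voxels)]
    selected = [v for v in voxels if pearson(v, behavior) > threshold]
    mean_signal = [statistics.mean(v[s] for v in selected)
                   for s in range(n_subjects)]
    return pearson(mean_signal, behavior)

# Despite the data being pure noise, the circular analysis reports a
# large positive correlation.
print(non_independent_r())
```

Averaging only the voxels pre-selected for their correlation with behavior guarantees a positive result from noise alone; an independent analysis (e.g., selecting voxels on one half of the data and measuring the correlation on the other half) would hover near zero.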

Not so coincidentally, some of these same papers have been flagged (or flogged) in this very blog. The Neurocritic's very first post 2.94 yrs ago, Men are Torturers, Women are Nurturers..., complained about the overblown conclusions and misleading press coverage of a particular paper (Singer et al., 2006), as well as its methodology:
And don't get me started on their methodology -- a priori regions of interest (ROIs) for pain-related empathy in fronto-insular cortex and anterior cingulate cortex (like the relationship between those brain regions and "pain-related empathy" are well-established!) -- and on their pink-and-blue color-coded tables!
Not necessarily the most sophisticated deconstruction of analytic techniques, but it was the first...and it did question how the regions of interest were selected. And of course how the data were interpreted and presented in the press.
SUMMARY from The Neurocritic : Ummm, it's nice they can generalize from 16 male undergrads to the evolution of sex differences that are universally valid in all societies.

As you can tell, this one really bothers me...
And what are the conclusions of Vul et al.?
To sum up, then, we are led to conclude that a disturbingly large, and quite prominent, segment of social neuroscience research is using seriously defective research methods and producing a profusion of numbers that should not be believed.
Finally, they call upon the authors to re-analyze their data and correct the scientific record.



Footnotes

1 Kanwisher was elected to the prestigious National Academy of Sciences in 2005.

2 The authors note that the problems are probably not unique to neuroimaging papers in this particular subfield, however.

References

Eisenberger NI, Lieberman MD, Williams KD. (2003). Does rejection hurt? An FMRI study of social exclusion. Science 302:290-2.

Singer T, Seymour B, O'Doherty J, Kaube H, Dolan RJ, Frith CD. (2004). Empathy for pain involves the affective but not sensory components of pain. Science 303:1157-62.

Singer T, Seymour B, O'doherty JP, Stephan KE, Dolan RJ, Frith CD. (2006) Empathic neural responses are modulated by the perceived fairness of others. Nature 439:466-9.

Edward Vul, Christine Harris, Piotr Winkielman, & Harold Pashler (2009). Voodoo Correlations in Social Neuroscience. Perspectives on Psychological Science, in press. PDF

Vul E, Kanwisher N. (in press). Begging the question: The non-independence error in fMRI data analysis. To appear in Hanson, S. & Bunzl, M (Eds.), Foundations and Philosophy for Neuroimaging. PDF


Wednesday, December 08, 2010

Voodoo Correlations: Two Years Later



The end of 2008 brought us the tabloid headline, Scan Scandal Hits Social Neuroscience. As initially reported by Mind Hacks, a new "bombshell of a paper" (Vul et al., 2009) questioned the implausibly high correlations observed in some fMRI studies in Social Neuroscience. A new look at the analytic methods revealed that over half of the sampled papers used faulty techniques to obtain their results.

-from Voodoo Correlations in Social Neuroscience, by The Neurocritic

The paper by Vul, Harris, Winkielman, and Pashler made its initial appearance on Ed Vul's website once it was accepted for publication by Perspectives on Psychological Science. Originally titled "Voodoo Correlations in Social Neuroscience", it was eventually renamed the more globally palatable "Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition" at the request of the editor (after much consternation from the criticized authors). This paper sparked intense debate in the field of functional neuroimaging, much of which occurred in the blogosphere (at least initially).1




Are blogs good or bad for the enterprise of scientific peer review? At present, the system relies on anonymous referees to provide "unbiased" opinions of a paper's (or grant's) merits. For today, the discussion will focus on peer review of papers in scientific journals.

-from The Voodoo of Peer Review, by The Neurocritic
Many of the aggrieved researchers in the neuroimaging community were appalled that bloggers were discussing Vul's accepted paper before it was "properly" published (and before they had time to comment themselves). But two research groups quickly issued replies:
Two rebuttals were released online shortly thereafter: one by Jabbi et al. (PDF) and an invited reply by Lieberman et al. (PDF).

What's the problem here? It's that bloggers were writing about it! That authors and anonymous commenters somehow sullied their ideological purity by entering the free-wheeling, fast-moving world of the blogosphere. But in the modern era, why wait 5 months for a paper to be "officially" published before you're allowed to discuss it? And despite what the critics of Voodoo say, Vul et al.'s paper was not plastered all over the popular press (unlike many of the Voodoo findings themselves)...

The only other mainstream media exposure has been from Sharon Begley of Newsweek, who covered the issue in her blog (i.e., The 'Voodoo' Science of Brain Imaging and More on Brain Voodoo) and in one of her magazine columns. But many are dubious. According to Seed [That Voodoo That Scientists Do]:

Two groups of neuroimaging scientists, shocked by the speed with which this paper was being publicly disseminated, wrote rebuttals and posted them in the comments section of several blogs, including Begley's. Vul followed up in kind, linking to a rebuttal of the rebuttals in the comment sections of several blogs. This kind of scientific discourse — which typically takes place in the front matter of scholarly journals or over the course of several conferences — developed at a breakneck pace, months before the findings were officially published, and among the usual chaos of blog comments: inane banter, tangents, and valid opinions from the greater public.
The usual chaos of blog comments? Hello?? How about anonymous referees for journals? Are they never ever guilty of reviews filled with inane banter and tangents? We've all had exposure -- whether from our bosses, advisors, or colleagues or through our own experience -- to rude and nasty and ill-informed reviewers. And many journal editors do not rein them in. The Neurocritic has been a proponent of completely open peer review, where the identity of the authors and the reviewers is known (see Anonymous Peer Review Means Never Having to Say You're Sorry, Peer Review Trial and Debate at Nature, and Double-Blind Bind). That way, Dr. Nasty can't hide behind the shield of anonymity when making those dumb-ass comments.

-from The Voodoo of Peer Review, by The Neurocritic

This issue is relevant again today because of the fallout over the infamous arsenic paper (Wolfe-Simon et al., 2010), which claimed it had isolated a bacterium that can substitute arsenic for phosphorus. Publication of the paper was preceded by a Sphinxlike press release: "NASA will hold a news conference at 2 p.m. EST on Thursday, Dec. 2, to discuss an astrobiology finding that will impact the search for evidence of extraterrestrial life." After the paper appeared online in Science, negative reactions from qualified and prominent scientists were swift. One of the most visible (and withering) critiques was written by Dr. Rosie Redfield, a microbiologist at the University of British Columbia:

Arsenic-associated bacteria (NASA's claims)

Here's a detailed review of the new paper from NASA claiming to have isolated a bacterium that substitutes arsenic for phosphorus on its macromolecules and metabolites. ... Basically, it doesn't present ANY convincing evidence that arsenic has been incorporated into DNA (or any other biological molecule).

. . .

Bottom line: Lots of flim-flam, but very little reliable information. The mass spec measurements may be very well done (I lack expertise here), but their value is severely compromised by the poor quality of the inputs. If this data was presented by a PhD student at their committee meeting, I'd send them back to the bench to do more cleanup and controls.
CBC News covered the critical backlash and NASA's reply, which was anti-blog:

NASA's arsenic microbe science slammed

. . .

Debate shouldn't be in media: NASA

When NASA spokesman Dwayne Brown was asked about public criticisms of the paper in the blogosphere, he noted that the article was peer-reviewed and published in one of the most prestigious scientific journals. He added that Wolfe-Simon will not be responding to individual criticisms, as the agency doesn't feel it is appropriate to debate the science using the media and bloggers. Instead, it believes that should be done in scientific publications.

Redfield said the reason she posted the review on her blog is partly because scientific publications such as Science — and the debates therein — are typically behind a paywall and inaccessible to the public.

"I blog openly…to bring this stuff more into the open where everybody can see it," she said.

Redfield has now posted a draft of her official letter to Science.

For full coverage of the matter, I recommend Is That Arsenic-Loving Bug — Formerly an Alien — a Dog? and The Wrong Stuff: NASA Dismisses Arsenic Critique Because Critical Priest Not Standing on Altar by David Dobbs, "This Paper Should Not Have Been Published" by Carl Zimmer, and An arsenic bacteria postmortem: NASA responds, tries to pit blogs vs. “credible media organizations” by Ivan Oransky (for starters).


Returning now to Voodoo Correlations... In the November 2010 issue of Perspectives on Psychological Science (online December 7, 2010), outgoing editor Ed Diener has assembled an fMRI Special Section looking back at those heady days and forward into the future:


Neuroimaging: Voodoo, New Phrenology, or Scientific Breakthrough? Introduction to Special Section on fMRI

Ed Diener

In response to the widespread interest following the publication of Vul et al. (2009), Perspectives Editor Ed Diener invited researchers to contribute articles for a special section on fMRI, discussing the promises and issues facing neuroimaging.


Mistreating Psychology in the Decades of the Brain

Gregory A. Miller

Scientists tend to consider psychology-biology relationships in two distinct ways: by assuming that psychological phenomena can be fully explained in terms of biological events and by treating them as if they exist in separate realms. These approaches hold up scientific progress and have important implications for clinical practice and policy decisions (e.g., allocating research funds).


Brain Imaging, Cognitive Processes, and Brain Networks

Brian D. Gonsalves and Neal J. Cohen

The growth of neuroimaging research has led to reflection on what those techniques can actually tell us about cognitive processes. When used in combination with other cognitive neuroscience methods, neuroimaging has promise for making important advancements. For example, neuroimaging studies on memory have raised questions not only about the regions involved with memory but also about component cognitive processes (e.g., the role of different attention subsystems in memory retrieval), and this has resulted in more theorizing about the interactions of memory and attention.


Mapping Mental Function to Brain Structure: How Can Cognitive Neuroimaging Succeed?

Russell A. Poldrack

To understand the anatomy of mental functions, researchers may need to move away from commonly used brain mapping strategies and begin searching for selective associations. This will require more emphasis on the structure of cognitive processes, which may be achieved through development of formal ontologies (e.g., the Cognitive Atlas) that will describe the "parts" and processes of the mind. Using these ontologies in combination with large-scale data mining approaches may more directly relate mental processes and brain function.


The Appeal of the Brain in the Popular Press

Diane M. Beck

Why do people like the brain so much? Brain-related articles in the press, especially ones about fMRI research, tend to be very popular with the general public, but many of these articles may result in misinterpretations of the science. Part of the popularity may be attributed to their deceptively simple message: Perform an action and a certain area lights up. In addition, people are more confident in "biological" images than in the behavioral phenomena on which the images are based. In order to maintain trust with the public, scientists have a responsibility to provide the press with descriptions of research and interpretations of results that are clear, relevant, and scientifically accurate.


Frontiers in Human Neuroscience: The Golden Triangle and Beyond

Jean Decety and John Cacioppo

The development of neuroimaging has created an opportunity to address old questions about brain function and behavior in new ways and also to uncover new questions. The knowledge that emerges from neuroimaging studies is more likely to be beneficial when combined with techniques and analyses that break down complex constructs into structures and processes, measures that gauge neural events across different times, and animal studies.


Bridging Psychological and Biological Science: The Good, Bad, and Ugly

Arthur P. Shimamura

The advent of functional neuroimaging has brought both praise and criticism to the field of psychological science. Although most studies relying on fMRI are correlative, they do offer some clues about the biology underlying psychological processes. However, it is not sufficient to show which area of the brain is involved in a particular cognitive process; rather, theories need to address "how?" questions (e.g., How does the hippocampus contribute to remembering?) in order to best bridge psychological and biological science.



Footnote

1 Also see Mind Hacks, BPS Research Digest, and Neuroskeptic.

References

Vul E, Harris C, Winkielman P, Pashler H (2009). Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition. Perspectives on Psychological Science 4(3), 274-290.

Wolfe-Simon F, Blum JS, Kulp TR, Gordon GW, Hoeft SE, Pett-Ridge J, Stolz JF, Webb SM, Weber PK, Davies PC, Anbar AD, & Oremland RS (2010). A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus. Science.


Thanks to Sandra of Channel N for alerting me to the special issue.

