The most sophisticated, widely adopted, and important tool for looking at living brain activity actually does no such thing. Called functional magnetic resonance imaging, what it really does is scan for the magnetic signatures of oxygen-rich blood. Blood flow indicates that the brain is doing something, but it’s not a direct measure of brain activity.
Which is to say, there’s room for error. That’s why neuroscientists use special statistics to filter out noise in their fMRIs, verifying that the shaded blobs they see pulsing across their computer screens reflect blood flow driven by real brain activity rather than chance. If those filters don’t work, an fMRI scan is about as useful at detecting neuronal activity as your dad’s “brain sucking alien” hand trick. And a new paper suggests that might actually be the case for thousands of fMRI studies published over the past 15 years.
The paper, published June 29 in the Proceedings of the National Academy of Sciences, called as many as 40,000 fMRI studies from those 15 years into question. But many neuroscientists, including the study’s whistleblowing authors, are now saying the negative attention is overblown.
Neuroscience has long wrestled with just how faithfully fMRI data reflects brain function. “In the early days these fMRI signals were very small, buried in a huge amount of noise,” says Elizabeth Hillman, a biomedical engineer at the Zuckerman Institute at Columbia University. A lot of this noise is literal: noise from the scanner, noise from the electrical components, noise from the person’s body as it breathes and pumps blood.
Then there’s noise from inside the person’s brain. “You sit there hooked up to this machine and the scientists ask you to do simple tests like tap your fingers,” says Hillman. “But you aren’t just tapping your fingers, you’re sitting there thinking about being in a machine and all these other things.”
And mixed up in all this noise, the magnetic signal the fMRI is looking for is relatively weak. So researchers use statistical software to help them separate the signal from the noise. When that software malfunctions, it produces false positives: indications of brain activity where none exists. (Several years ago, researchers famously showed that without the right statistical corrections, an fMRI analysis would find “activity” in the brain of a dead fish.) A false positive in an fMRI is a voxel that lights up as active when nothing is actually happening there. You expect a certain number of these when you’re dealing with something as flighty and variable as blood in the brain. But if the chance of a false positive climbs above 5 percent, the study is bunk.
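To get a feel for why some false positives are unavoidable, here is a minimal sketch in Python with made-up numbers, not data from any real scan: test tens of thousands of voxels of pure noise at the conventional 5 percent significance level, and roughly 5 percent of them will “light up” by chance.

```python
# A minimal sketch with made-up numbers, not real fMRI data: even pure noise
# produces "active" voxels when every voxel is tested at the p < 0.05 level.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 64 * 64 * 30                  # a rough whole-brain voxel count
noise = rng.standard_normal(n_voxels)    # no signal anywhere, just noise

# A one-sided p < 0.05 corresponds to a z-score above about 1.645.
false_positives = np.sum(noise > 1.645)
print(f"{false_positives} of {n_voxels} pure-noise voxels look 'active' "
      f"({100 * false_positives / n_voxels:.1f}%)")
```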
That’s where the new study found a problem. It goes back to one of the underlying assumptions in fMRI analysis: if one voxel in the 3-D brain scan is showing activity, its neighbors probably are too, so genuine signals should show up as clusters of active voxels. To decide whether a given cluster is real, statistical software estimates how large a clump of voxels could form by chance alone. The study’s authors found that some of those software packages got that estimate wrong, underestimating how much pure noise clumps together on its own. Their cluster-size cutoffs came out too lenient, so the resulting images would show blobs of “brain activity” that were really just noise.
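As a rough illustration of the mechanism, here is a minimal sketch in Python on synthetic noise, not the study’s software, where every shape, threshold, and cutoff is an invented illustration value: smooth a random volume, flag the voxels that cross a threshold, and count how many of the resulting clumps survive a given size cutoff. Set the cutoff too low and noise blobs survive.

```python
# A minimal sketch, not the study's software: cluster thresholding on pure noise.
# All shapes, thresholds, and cutoffs are made-up illustration values.
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(1)
volume = rng.standard_normal((64, 64, 30))   # a toy "brain" of pure noise
smooth = gaussian_filter(volume, sigma=2.0)  # spatial smoothing, as in fMRI preprocessing
smooth /= smooth.std()                       # re-standardize after smoothing

# Flag voxels that cross a threshold, then group touching voxels into clusters.
clusters, n_clusters = label(smooth > 2.3)
sizes = np.bincount(clusters.ravel())[1:]    # voxel count for each cluster

# A cluster only counts as "activity" if it beats a size cutoff. A cutoff that is
# too small (as when software underestimates how noise clumps) lets noise through.
for cutoff in (5, 20, 80):
    print(f"cutoff {cutoff:3d} voxels: {np.sum(sizes >= cutoff)} noise clusters survive")
```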
And not by a small margin. When the researchers used the statistics packages to compare fMRI data from 499 healthy individuals, drawn from control groups in studies around the world and split into groups of 20, the error rate jumped to 70 percent. “If I compare 20 healthy controls to another 20 healthy controls, there should be no difference,” says Anders Eklund, a biomedical engineer at Linköping University in Sweden.
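The spirit of that check can be mimicked with a toy simulation. Below is a minimal sketch in Python on synthetic data, not the paper’s 499 real scans or its actual methods, with a deliberately lenient cluster cutoff: split pure-noise “subjects” into two groups of 20, run a voxelwise t-test, keep any cluster above the cutoff, and count how often an analysis reports a difference that isn’t there.

```python
# A minimal sketch on synthetic noise, not the paper's data or methods: how often
# does a null comparison of two groups of 20 "subjects" report a bogus cluster?
import numpy as np
from scipy.ndimage import gaussian_filter, label
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
shape, n_per_group, n_analyses = (32, 32, 16), 20, 100
lenient_cutoff = 10   # a deliberately too-small cluster-size cutoff, for illustration

false_alarms = 0
for _ in range(n_analyses):
    # Each "subject" is a smoothed noise volume, so any group difference is spurious.
    group_a = gaussian_filter(rng.standard_normal((n_per_group, *shape)), sigma=(0, 2, 2, 2))
    group_b = gaussian_filter(rng.standard_normal((n_per_group, *shape)), sigma=(0, 2, 2, 2))
    t_map, _ = ttest_ind(group_a, group_b, axis=0)

    # Threshold voxels, group touching survivors into clusters, keep the big ones.
    clusters, _ = label(np.abs(t_map) > 3.0)
    sizes = np.bincount(clusters.ravel())[1:]
    if np.any(sizes >= lenient_cutoff):
        false_alarms += 1   # this whole analysis would report a "significant" difference

print(f"{100 * false_alarms / n_analyses:.0f}% of null analyses turned up a 'significant' cluster")
```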
The software bug the paper calls out was fixed in 2015, while Eklund and his co-author Thomas Nichols, a neuroimaging statistician, were still running their analysis. But because the flawed statistical methods had been in use for years, the paper’s abstract put the number of papers that could have been affected as high as 40,000.
This week, though, Nichols revised that number down to a maximum of 3,500 in a blog post. “I almost regret how we put the summary in the paper,” he says. The revised number, Nichols explains, represents papers whose results sit right on the edge of statistical significance.
That still sounds like a lot of papers, but other researchers played down the hype. “No one in the community who knows what they are doing are really fazed by this at all,” says Peter Bandettini, chief of brain imaging at the National Institute of Mental Health. “Only the most tenuous and over-interpreted results would perhaps change with this test.” Bandettini points out that any papers affected this severely would have been toeing the line of statistical significance anyway, and would be viewed with suspicion by the neuroscience community at large.
Still, most agree that neuroscience needs to shore up the way it treats fMRI data. “Brain imagery has this tradition of showing a picture, but the data underlying that image is never shared,” says Nichols. This means outside researchers cannot verify whether the voxels shown in a brain image were statistically valid or not. Or at least, that’s how it has been in the past. Eklund and Nichols have started petitioning journal editors to change submission guidelines, so that new papers are required to include their statistical evaluations.
“Frankly, this is the only modality we have right now that can give us a view of the working human brain,” says Hillman. Better to know the brain is doing something than to know nothing at all.