Even if you're not all that into science, it's still a big part of the news that reaches you on a day-to-day basis -- you'll see interesting headlines about how studies show marijuana cures loneliness or how other studies say pot ruins your memory, and you kind of just assume they're true. If scientists say eating Cheerios will lower your cholesterol, you feel better about buying them. After all, if you can't trust scientists, who can you trust?
But one thing we've learned, as a site that likes to publish reader-friendly science articles that link to these studies, is how much of the stuff that comes our way is, well, worthless. As it turns out, the problem is ...
Ingram Publishing/Ingram Publishing/Getty Images
For all the advice they keep throwing at us from the field of medical science -- eat less meat, or more fiber, or fuck fewer drainpipes -- it may seem like medical research is moving forward at a breakneck pace. But then, why do so many health-food crazes seem to disappear as quickly as they arrived? Why do we still not know whether vitamin C can help cure your cold? Frighteningly enough, it's because most medical research is bullshit.
Anna Lurye/iStock/Getty Images
You had the power to get a boner inside you all along!
Don't take our word for it -- listen to Dr. John Ioannidis and his team of celebrated meta-researchers. Their job is to comb through all these awesome-sounding medical studies to assess their validity. And, surprise: Up to 90 percent are critically flawed in some way or another. If you're hoping this is contained to fringe, little-used research, find a new vessel for your misplaced hope. Ioannidis and his crew examined 49 of the most highly regarded medical findings of the last decade or so -- between a third and a half of them were straight-up wrong or highly exaggerated.
That's why all the stories about "promising new research" tend to come and go with the speed of Disney pop stars ("Take Omega-3 to prevent heart disease! Or don't! Who the fuck knows?"). And it isn't just the stuff that turns up on BuzzFeed: We're talking about the body of knowledge that your doctor relies on when prescribing drugs, giving advice on dietary habits, and recommending surgery, among other things.
This is why all operating rooms now have a live feed of trending Twitter hashtags.
How can so many studies be so badly flawed? Well ...
Jose Luis Pelaez Inc/Blend Images/Getty Images
Science tends to require the use of numbers. And while most of us probably have a tough time figuring out what all those numbers and letters and Greek symbols in algebra equations are supposed to mean, we're content to leave it to the experts to do all the understanding for us. Man, it would be hilariously terrifying if those experts turned out to be as clueless as the rest of us, wouldn't it?
If God wanted us to know trigonometry, he wouldn't have given us calculators.
Enter Kimmo Eriksson, a Swedish mathematician. He decided midway through his career that pure math wasn't doing it for him anymore and moved into cultural studies. It was at that point he realized his new colleagues were basically awful at math. So he conducted an experiment to find out how widespread the issue was. Eriksson picked two research papers at random and sent them out to a bunch of scientists. To a random half of the papers he added an equation that had nothing to do with the study whatsoever and was, in context, utter nonsense.
Eriksson asked the recipients to judge the quality of the research. The mathematicians and physicists were basically unimpressed, but in every other field the inclusion of the equation got the papers a higher ranking, even though it was pointless bullshit -- it just looked more impressive with the complicated math in there. More than 60 percent of the medical researchers, the people trying to save all of our lives, ranked the junk papers as better on the grounds of, "It must be right -- look at all this awesome math shit he's got in there!"
The research by Eriksson (or "Kimmo the number wizard," as he is known in the humanities) is not the only evidence that scientists treat math as some mysterious occult force. Research into ecology and evolution shows that papers are 28 percent less likely to be cited for every additional equation per page. It seems that basically everyone who isn't a physicist or engineer treats math with a policy of "run away as quickly as possible."
If we say a study found a "statistically significant" link between the use of feather pillows and brain cancer, what do you think that means? It means the scientist found something that you'd better damned well pay attention to, right?
Jack Hollingsworth/Photodisc/Getty Images
Put down your Spirographs and pay attention! Science is talking!
"Statistical significance" is just the fancy name for what happens when you see a relationship between two variables that probably isn't due to random chance. A hell of a lot of scientific research involves investigating relationships in statistics, like whether a certain drug has a correlation with getting cancer, for example. The problem is that, in this context, "significant" doesn't necessarily mean "important." For instance, there is a statistically significant link between ice cream consumption and murder rate. But before you start burning ice cream vans, this is just a confusion between correlation and causation -- ice cream consumption and murder both just happen to increase in the summer.
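The ice cream/murder confusion can be sketched in a few lines of Python (every number here is invented purely for illustration): two variables that never touch each other still come out strongly correlated, because both are driven by a third factor -- summer temperature.

```python
# A minimal sketch of a confounder at work, with made-up numbers:
# ice cream sales and murders both rise with temperature, so they
# correlate with each other even though neither causes the other.
import random

random.seed(42)

# Simulate 120 months where temperature is the shared driver.
temps = [random.uniform(0, 35) for _ in range(120)]             # degrees C
ice_cream = [10 + 2.0 * t + random.gauss(0, 5) for t in temps]  # sales
murders = [5 + 0.3 * t + random.gauss(0, 2) for t in temps]     # incidents

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream, murders)
print(f"correlation between ice cream and murders: {r:.2f}")
# Strong, "statistically significant"-looking correlation -- but the
# only real relationship is that both depend on temperature.
```

Run it and you'll get a hefty correlation between the two, which a careless write-up could easily headline as "Ice Cream Linked to Murder."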
Luis Carlos Torres/iStock/Getty Images
Which both happen to coincide with all the good TV shows going on hiatus.
If you didn't know how weak a "statistically significant" finding can be, don't worry -- neither do scientists. When they find a link between sleepiness and vitamin D, or whole fruits and decreased risk of type 2 diabetes, they call it "significant" and, more often than not, end up exaggerating the hell out of their claims. The media ends up reporting it inaccurately because the researchers don't include the proper caveats. One statistician took a look and found that "eight or nine of every 10 articles published in the leading journals" make the massive error of equating statistical significance to importance.
As an example, one recently published study purported to have found a link between walnuts and a drop in diabetes risk. How'd they discover that? Well, by tracking a whole bunch of nurses, looking at their walnut consumption, and seeing which ones developed diabetes. To the layperson, and by extension the media, this sounds like a pretty cut-and-dried way of studying the phenomenon. But think about that -- did they look at other factors, like whether people who ate fewer nuts also tended to go home at night and eat a whole tub of butterscotch ice cream? Nope -- they just asked the participants how often they ate walnuts and used the answers as the basis of their conclusions.
Additionally, squirrels generally make for a poor control group.
By the same token, we could investigate the link between Apple products and hipster mustaches to conclude that iPhones somehow stimulate hair growth on the upper lip. Or, as this sarcastic study pointed out, you can statistically prove that listening to certain music makes you younger.
But that just brings us to another point ...