4 ... And They Don't Understand Statistics, Either
If we say a study found a "statistically significant" link between the use of feather pillows and brain cancer, what do you think that means? It means the scientist found something that you'd better damned well pay attention to, right?
Jack Hollingsworth/Photodisc/Getty Images
Put down your Spirographs and pay attention! Science is talking!
"Statistical significance" is just the fancy name for what happens when you see a relationship between two variables that probably isn't due to random chance. A hell of a lot of scientific research involves investigating statistical relationships -- whether a certain drug is correlated with getting cancer, for example. The problem is that, in this context, "significant" doesn't necessarily mean "important." For instance, there is a statistically significant link between ice cream consumption and murder rate. But before you start burning ice cream vans, know that this is just a confusion between correlation and causation -- ice cream consumption and murder both just happen to increase in the summer.
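The ice cream/murder pattern is easy to reproduce in a toy simulation (all numbers here are invented for illustration, not real crime data): let a third variable -- temperature -- drive both series, and they'll correlate strongly even though neither one causes the other.

```python
import random

random.seed(0)

def pearson(xs, ys):
    # plain Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical monthly data: temperature drives BOTH series;
# neither series has any causal effect on the other.
temps = [random.uniform(0, 35) for _ in range(120)]
ice_cream = [20 + 3.0 * t + random.gauss(0, 10) for t in temps]
murders = [5 + 0.5 * t + random.gauss(0, 3) for t in temps]

r = pearson(ice_cream, murders)
print(f"ice cream vs. murders: r = {r:.2f}")  # strongly positive, zero causation
```

Drop temperature out of the simulation and the correlation vanishes -- which is the whole point of controlling for confounders.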
Luis Carlos Torres/iStock/Getty Images
Which both happen to coincide with all the good TV shows going on hiatus.
If you didn't know how weak a "statistically significant" finding can be, don't worry -- neither do a lot of scientists. When they find a link between sleepiness and vitamin D, or whole fruits and decreased risk of type 2 diabetes, they call it "significant" and, more often than not, end up exaggerating the hell out of their claims. The media then reports it inaccurately because the researchers didn't include the proper caveats. One statistician took a look and found that "eight or nine of every 10 articles published in the leading journals" make the massive error of equating statistical significance with importance.
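The significance-vs.-importance gap shows up clearly in a quick sketch (hypothetical numbers, invented effect size): with a big enough sample, even a trivially small difference sails under the p < 0.05 bar.

```python
import math
import random

random.seed(4)

# "Significant" is not "important": a 0.3-point bump on a scale where the
# standard deviation is 15 points is nothing, but with 200,000 subjects
# per group it is wildly "statistically significant."
n = 200_000
control = [random.gauss(100.0, 15.0) for _ in range(n)]
treated = [random.gauss(100.3, 15.0) for _ in range(n)]

diff = sum(treated) / n - sum(control) / n
se = 15.0 * math.sqrt(2 / n)            # standard error of the difference
z = diff / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p
print(f"difference: {diff:.2f} points, p = {p:.2g}")
```

The p-value only says the difference probably isn't zero; it says nothing about whether the difference is big enough for anyone to care.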
As an example, one recently published study purported to have found a link between walnuts and a drop in diabetes risk. How'd they discover that? Well, by tracking a whole bunch of nurses, looking at their walnut consumption, and seeing which ones developed diabetes. To the layperson, and by extension the media, this sounds like a pretty cut-and-dried way of studying the phenomenon. But think about it -- did they look at other factors, like whether people who ate fewer nuts also tended to go home at night and eat a whole tub of butterscotch ice cream? Nope -- they just asked the participants how often they ate walnuts and used the answers as the basis of their conclusions.
Additionally, squirrels generally make for a poor control group.
By the same token, we could investigate the link between Apple products and hipster mustaches to conclude that iPhones somehow stimulate hair growth on the upper lip. Or, as this sarcastic study pointed out, you can statistically "prove" that listening to certain music makes you younger.
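The trick behind that music study is testing lots of things and reporting whichever crosses the p < 0.05 line. Here's a rough sketch of why that always works (simulated noise, no real data): run enough "studies" where nothing is going on and a few come out "significant" by chance alone.

```python
import math
import random

random.seed(1)

def two_sample_p(a, b):
    # crude large-sample z-test for a difference in means
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    # two-sided p-value from the normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 100 "studies" in which nothing whatsoever is going on:
# both groups are identical random noise
hits = 0
for _ in range(100):
    group_a = [random.gauss(0, 1) for _ in range(50)]
    group_b = [random.gauss(0, 1) for _ in range(50)]
    if two_sample_p(group_a, group_b) < 0.05:
        hits += 1

print(f"{hits} out of 100 pure-noise studies came out 'significant'")
```

By construction, about 1 in 20 of these null results clears the threshold -- publish those, file-drawer the rest, and you've "proved" anything you like.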
But that just brings us to another point ...
3 Scientists Have Nearly Unlimited Room to Manipulate Data
Top Photo Corporation/Top Photo Group/Getty Images
When you set out to test something -- say, whether wolf bites have a statistical link to werewolfism -- a whole lot of your results are decided before you even start. As the intentionally silly study we mentioned above pointed out, in any experiment the scientist gets to decide which things to compare (What about other animal bites?), how long to collect the data (Would you get different results six months from now?), which data to include (Are you accounting for the subjects' age? Diet? Ethnicity? Phase of the moon under which they were bitten?), and on and on -- countless little choices about what to include and, more importantly, what to leave out of the study.
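Those little choices can be sketched in code (hypothetical data, made-up "forks"): take one dataset where the "treatment" does absolutely nothing, re-run the same comparison under different inclusion rules, and a motivated analyst simply reports the friendliest p-value.

```python
import math
import random

random.seed(3)

def two_sample_p(a, b):
    # crude large-sample z-test for a difference in means
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# One null dataset: the "treatment" has zero effect on the outcome
n = 200
ages = [random.randint(18, 70) for _ in range(n)]
treated = [random.random() < 0.5 for _ in range(n)]
outcome = [random.gauss(0, 1) for _ in range(n)]

# Made-up analysis "forks": same data, different inclusion rules
forks = {
    "everyone": lambda i: True,
    "under 30 only": lambda i: ages[i] < 30,
    "30 and over": lambda i: ages[i] >= 30,
    "drop first 50 subjects": lambda i: i >= 50,
    "drop last 50 subjects": lambda i: i < 150,
}

ps = []
for name, keep in forks.items():
    a = [outcome[i] for i in range(n) if keep(i) and treated[i]]
    b = [outcome[i] for i in range(n) if keep(i) and not treated[i]]
    p = two_sample_p(a, b)
    ps.append(p)
    print(f"{name:24s} p = {p:.3f}")
print(f"best (reported) p-value: {min(ps):.3f}")
```

Each fork is individually defensible, which is exactly what makes this kind of flexibility so hard to police.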
Stuart Wilson/Getty Images Entertainment/Getty Images
What about werewolves in will-they-won't-they romances that are
abruptly resolved with vague implications of pedophilia?
So, for example, here's an experiment you can try on your own: Look up some psychological studies on the Internet. Every time the participants are anything other than college students, take a drink. Congratulations! You're probably still as sober as a Mormon priest. That's because when psychology professors are looking for test subjects, they have the overwhelming tendency to use the large pool of students they see staggering around on campus. It's just so much easier than going out into the world and actually rounding up a cross-section of random folk (and law enforcement frowns on going out in a van and just snatching them in the dead of night).
That means a whole lot of behavioral science is built on studies done at First-World universities, and those studies rest on the assumption that their young, relatively healthy, sedentary, economically privileged, and mostly white test subjects are somehow representative of the people who make up the population as a whole (i.e., the other 99.7 percent of the world's inhabitants).
"Fine, you can throw a woman in there to even things out. But only one."
Unfortunately, a study illuminating the psychology of an average college student, whose primary philosophy is to have sex with that redhead in Anthro 101 and who lives on ramen noodles and liquor, may not be applicable even to, say, a poor single mom living three blocks away from the university. So behavioral science has developed tunnel vision, fixating on the richest, most open-minded fraction of educated young people in the world and assuming whatever answers it finds can be generalized to everyone else.
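The sampling problem is easy to demo with a toy sketch (a fake, uniformly spread age distribution -- not census data): estimate the average age of a population from a campus-only convenience sample versus a genuinely random one.

```python
import random

random.seed(2)

# Hypothetical population: 100,000 people with ages spread from 18 to 90
population = [random.uniform(18, 90) for _ in range(100_000)]

# Convenience sample: only the people hanging around campus (ages 18-22)
campus = [age for age in population if age <= 22]
conv_mean = sum(random.sample(campus, 200)) / 200

# What a representative study needs: a random sample of everyone
rand_mean = sum(random.sample(population, 200)) / 200

pop_mean = sum(population) / len(population)
print(f"true mean age:          {pop_mean:.1f}")
print(f"random sample estimate: {rand_mean:.1f}")
print(f"campus-only estimate:   {conv_mean:.1f}")
```

The convenience sample isn't just a little off -- it can't possibly land near the truth, because the people who would pull the estimate toward it were never eligible to be sampled.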
But hey, scientists are people, after all, and they study what they know. Funny, then, that a fairly popular avenue of research involves participant observation of strip clubs. As in, scientists receiving grant money to sit and watch strippers pole dance. You know, for science.
Joe Raedle/Getty Images News/Getty Images
It's said that Einstein did some of his best work while getting a lap dance to "Pour Some Sugar on Me."