The 6 Most Statistically Full of Shit Professions
People get paid a lot of money to be experts on things, so one would assume they're much more knowledgeable than the average Joe or, at the very least, a blindfolded monkey throwing darts.
Sadly, in many cases this just isn't true, and the so-called "expertise" in question amounts to little more than a shot in the goddamn dark. Here are a few cases of experts who probably shouldn't inspire as much confidence as they do.
Stock Market Experts
Many of us find the stock market too intimidating to put money into, or at least we would if we had the money to invest in the first place. How do you decide what stocks to pick? We can't even pick where to go for lunch half the time and we understand lunch.
That's when you call in a professional, or if you're not rich, you buy a pre-set package of stocks and bonds that a professional has pre-picked for you, and then sit back and, uh...
Watch your stocks grow more slowly than if you picked them at random.
Yes, as it turns out, the majority of professionally managed funds picked by stock market experts (70 to 85 percent) actually underperform the Dow or S&P indexes, which are technically supposed to represent the average performance of the market to begin with.
Results not typical.
If you do have to hand your nest egg off to someone else, try Warren Buffett, whose Berkshire Hathaway stock has outperformed the index by 11.14 percent on average for over 30 years. So it's not that financial advisors can't know what to pick. They usually just don't.
But hey, there is some good news: When going up against a bunch of dudes throwing darts at a chart to randomly pick their stocks, the stock professionals performed better.
Wine Experts
One thing we can all be sure about is that people who make their living writing about wine must be able to sniff out differences between wines much better than us plain, ordinary folk.
Sure, Joe Consumer actually likes cheaper wines better, but that's because Joe Consumer is a stupid Philistine. The experts can tell the difference between a 2006 and 2007 Stag's Leap Cabernet Sauvignon in their sleep because everyone knows 2006 was a pedestrian year for Napa Valley reds.
Only civilians couldn't.
Hell, they are so good they can tell the difference between two bottles of the same wine. In one experiment, wine experts were given two bottles of the same wine, only one was labeled a "vin de table" (France's version of "Night Train") and one was labeled a "grand cru" (top-rated vineyard since 1855). Want to guess what happened?
According to the article: "Whereas the tasters found the wine from the first bottle 'simple,' 'unbalanced,' and 'weak,' they found the wine from the second 'complex,' 'balanced,' and 'full.'" Not only were their tasting skills put to shame, it didn't even occur to them that nobody buys a $40-plus bottle of wine for a university experiment.
"...this tastes like vodka and grape soda."
Not only can professional wine tasters be convinced that the same bottle of wine is both award-winning and hobo juice, they can even be convinced that the same wine is both red and white with the cunning use of food coloring.
That's not to say the whole idea of wine tasting is a crock; it just seems like a field where judging with one's eyes is a temptation too easy to fall into. For example, in the 1976 Judgment of Paris, French experts rated American wines as superior to their own, recoiling in horror when they found out.
Art Critics
Despite being the battle cry of the bad artist, it's really true that art is subjective. So we don't expect art critics to be able to tell us which art is the "best." We do expect them to at least be able to tell the difference between a Van Gogh and a Picasso, or a Vermeer and a Gary Larson.
The good news is that one of those expectations is correct.
Han van Meegeren was an ordinary, mild-mannered artist in the 1930s who painted unimpressive portraits until one day an art critic called him "unoriginal." Determined to deliver the most ferocious professional scrotum kick in history, van Meegeren hatched a daring plan: paint a completely new painting in the style of Vermeer, let all the critics fawn over the newly discovered masterpiece, and then show them all for fools by revealing he had painted it.
Sure enough, his knock-off was hailed by critics as a Vermeer masterpiece, bought for the modern equivalent of $6 million and featured as the centerpiece of a prestigious gallery exhibition. Van Meegeren, realizing he liked money, ditched the plan to reveal himself and began painting more Vermeers. After the war, he was arrested for selling "stolen" Vermeers to the Nazis.
Because even Nazis have standards.
Then in 1964, Swedish art critics were fooled into praising the works of Pierre Brassau with descriptions like "Brassau paints with powerful strokes, but also with clear determination. His brush strokes twist with furious fastidiousness. Pierre is an artist who performs with the delicacy of a ballet dancer."
Brassau's methods? He "preferred eating the paint to placing it on a canvas." Because Brassau was a fucking chimpanzee.
Criminal Profilers
We've all learned from TV and movies that when a serial killer is on the loose, an attractive outside expert can come in and discover an intimate window into the killer's mind by examining the very pattern of his knife strokes.
"The body was found outside, which means our killer can't possibly be a white man."
How does a profiler pull off this magic? According to some studies, they don't. After analyzing studies on criminal profiling accuracy, researchers concluded that professional profilers' predictions are no more accurate than those of control groups relying on common sense and educated guesses. Also, many profilers refuse to participate in any kind of study that would verify their accuracy.
"A study, you say? Well fuck your tits, madam, I have a book to write."
Since they're so elusive to study, it's hard to say for sure how good criminal profilers are. Some have certainly been less successful than others, like the FBI profilers hunting the Unabomber, who identified their suspect as a married man living in a house in the suburbs, most likely an airplane mechanic. He was finally arrested in 1996 at his remote cabin, where he had been living as a wild-haired, crazy mountain man for 25 years.
Not pictured: a wife, an airplane.
Many self-proclaimed criminal profiling experts also shoved their faces into the media spotlight during the Washington Beltway sniper attacks to peg the randomly murdering snipers as a couple of white guys. "The experts were neither misogynists nor racists. They all agreed with Van Zandt that 'this is something white males do.'"
They slipped back into the shadows when John Allen Muhammad and Lee Boyd Malvo, two black men, were arrested and ultimately convicted for the killings.
TV Weather Forecasters
While it's long been a running joke that TV weather forecasters are hired for their good looks or entertainment value, we assume that someone in the back room is feeding them accurate information so they can at least read the weather.
A bored and curious gentleman in Kansas City with a penchant for statistical analysis decided to explore this assumption one day. He tracked the predictions of four local stations over 220 days and found that the four stations had about an 85 percent success rate in predicting if it would rain the next day, which looks pretty good at first glance.
But, here's the thing: It doesn't rain on most days. It's not a 50/50 thing. In most parts of the country it only rains on about 14 percent of the days.
OK, so suppose you went on the air and just predicted, every single day, that it won't rain. Over the 220-day window of the study, you'd beat the stations' average accuracy, since your "it won't rain" prediction is right 86.3 percent of the time.
Two out of the four stations barely beat you (they got 87 percent) while the other two fell below the threshold.
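The base-rate math here is simple enough to check yourself. A quick sketch -- note the 30-rainy-days count is our assumption, back-solved from the 86.3 percent figure, since only the percentages are reported:

```python
# Base-rate sanity check for the Kansas City forecast study.
# Assumption (not from the study): about 30 of the 220 tracked days
# actually had rain, which puts the lazy strategy near the quoted 86.3%.
days = 220
rainy_days = 30

# Strategy: predict "no rain" every day, no meteorology degree required.
lazy_accuracy = (days - rainy_days) / days

station_average = 0.85  # the four stations' average next-day accuracy

print(f"Always predict dry: {lazy_accuracy:.1%}")   # -> 86.4%
print(f"Station average:    {station_average:.1%}")  # -> 85.0%
print(f"Lazy guy wins: {lazy_accuracy > station_average}")  # -> True
```

The point isn't that forecasters know nothing; it's that raw accuracy is a garbage metric when one outcome dominates.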
He then narrowed the data down to debatable days, eliminating days when it clearly wasn't going to rain--basically boiling it down to the days people would actually care about the forecast. That lowered the lazy baseline to 50 percent, the equivalent of flipping a coin when it's cloudy to predict whether it will rain tomorrow, and the news stations again barely managed to defeat the inanimate object, ranging between 50 and 60 percent accuracy.
Seriously, anyone can do this.
By the time the test was adjusted to predict the weather three days out, the coin was winning in all cases.
With all that said, this guy's conclusion isn't that meteorology is untrustworthy, but rather that local TV weather forecasting places too much emphasis on good hair and terrible jokes and not enough on smaller details, such as when it is going to rain.
Sports Pundits
Millions of guys would love to spend all their time watching games and telling people their opinions about sports, but only a select few get to do it, and they do so partly by keeping up a pretense of having some exclusive knowledge about the game that no one else does.
Any sports fan will tell you what a retarded hack their hometown sports columnist is, but sports fans (as with fans of anything, really) tend to be just as lazy as they are abusive, and not many compile a statistical analysis of their hated sportswriters' inaccuracies.
One man, however, did take it upon himself to prove the point empirically in 1971 with an actual study on sportswriters' ability to predict college and NFL games. Their success rate was .476, which you may notice is slightly worse than a coin. The coin's writing ability is arguably superior.
Before writing sports journalists off as complete morons, keep in mind that even Accuscore, a service that charges for its sports predictions based on complex computer algorithms that crunch stats and predict trends, only claims about 53 to 54 percent accuracy, which is still enough to make its customers money.
So, sports prediction is something that almost nobody can get a handle on, but still... worse than a coin toss?
If you want to tie your brain in a knot, think about this: If those guys sitting behind the desk at ESPN are performing worse than chance when they try to make an "expert" judgment about who's going to win the game, that means they could improve their accuracy by always betting on the team they actually think is going to lose. Hell, some of them are wrong so often they could beat the Accuscore service simply by going against their instincts every time.
Eh, they'd still probably fuck it up somehow.
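That inversion trick is just arithmetic: if a pundit's picks are right with probability p, betting the opposite way is right with probability 1 - p. A sketch using the article's numbers -- the 0.44 rate for an especially wrong pundit is a made-up illustration, not a figure from the study:

```python
# If an expert picks winners at rate p, the anti-expert picks at 1 - p.
study_rate = 0.476   # sportswriters' success rate in the 1971 study
accuscore = 0.53     # low end of Accuscore's claimed accuracy

anti_expert = 1 - study_rate
print(f"Betting against the average pundit: {anti_expert:.3f}")  # -> 0.524

# Hypothetical pundit who is wrong even more often (illustrative number):
hapless = 0.44
print(f"Betting against the hapless one: {1 - hapless:.3f}")  # -> 0.560
print(f"Beats Accuscore: {1 - hapless > accuscore}")          # -> True
```

So a pundit who is reliably wrong is, in a strictly mathematical sense, exactly as valuable as one who is reliably right. You just have to hold the newspaper upside down.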