5 Everyday Things You See Online (That Are Brainwashing You)
If movies are any indication of how the real world works, your supervillain plan for world domination needs to have at least 20 steps and somehow involve the Moon. If it isn't circuitously complicated, why even bother? But in reality, a true supervillain doesn't need access to the world's gold supply or an arsenal of nukes to make the planet go his or her way. Our next evil mastermind simply needs a basic understanding of how the rest of us use the internet.
The Upvote Mechanism Makes It Easy For A Few People To Control A Large Group
With a wealth of information at the fingertips of anyone owning an internet-connected device, you would think we would be the most informed mamajammas to have ever walked the Earth. And you'd be wrong! Yes, we have a lot of information to explore, but that information is so curated that it might as well not be there. And one way we curate our own exposure to ideas is by upvoting, or "liking," other people's posts.
You know the drill: On sites like Reddit or Yelp, comments and posts rise to the top based on votes. The problem is that we aren't totally honest with ourselves when it comes to how we vote. Let's say you're the first person to read a Reddit post. You want to give the writer a quick nod of encouragement for speaking his or her mind, so you give it an upvote. Whoever encounters the post next is going to respond to the content PLUS the vote you just gave. If your response was positive, they're more likely to vote positively as well. And then the next person, who sees two positive responses, feels compelled to agree with the two people who clearly saw something special in the post. Several thousand upvotes later, we've got content on the front page of Reddit.
Racial politics and failed military takeovers are only slightly more susceptible to groupthink than Pokemon made of foliage.
Sure, some of us are reading every word of every post and thoughtfully weighing the pros and cons of an upvote before committing to a click, but most of us aren't. One team of researchers found that by giving a post a single upvote, the post's overall performance increased by 25 percent. And on a site that has 36 million registered accounts, a 25-percent boost carries serious weight.
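If you want to see how little it takes, here's a toy expected-value sketch of that herding effect. To be clear, this is our own illustration, not the researchers' actual model -- the voter count, base appeal, and herd-bias numbers are invented:

```python
def expected_score(seed_score, voters=50, base_appeal=0.5, herd_bias=0.01):
    """Expected final score of a post when each voter's chance of
    upvoting is nudged upward by the score they see on arrival."""
    score = float(seed_score)
    for _ in range(voters):
        # probability this voter upvotes, clamped to [0, 1]
        p = min(1.0, max(0.0, base_appeal + herd_bias * score))
        # expected change in score: +1 with probability p, -1 otherwise
        score += 2 * p - 1
    return score

print(expected_score(0))  # no seed vote: the post hovers at zero
print(expected_score(1))  # one early upvote compounds, round after round
```

With these made-up numbers, the unseeded post goes nowhere while the single seed vote snowballs by 2 percent per voter -- the same rich-get-richer dynamic the researchers measured.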
Some Reddit users have picked up on this process and have used it to their advantage, manipulating public perception in order to have their competitors banned or raise themselves to prominence on the site.
"I'd like to thank all the me who helped make this artificial, fleeting fame possible."
And while the Reddit community takes it upon themselves to hunt down and ban vote manipulators, all the monitors in the world can't change the fact that we're a bunch of sheep in the first place.
Social Media Algorithms Keep Us In The Dark (And Could Control Our Decisions)
Speaking of herd mentality, social media sites are perfectly designed to ensure that you are surrounded with your very own echo chamber. Never again will you have to suffer from differing opinions or tastes -- Facebook will ensure that you only see posts that are tailored to your exact liking.
Eventually, every post will be a selfie, all of them featuring you.
Social media algorithms are largely driven by what you click on: they run with that data and feed back similar posts, locking you in a never-ending cycle of similar stories and propelling you deeper and deeper into your single-minded way of thinking.
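Here's a minimal sketch of that feedback loop, assuming a feed that allots space in proportion to your past clicks and a user who only ever clicks their favorite topic. The ten-posts-per-round figure and the click model are our inventions, purely for illustration:

```python
def favorite_topic_share(rounds=5, topics=3, posts_per_round=10):
    """Track what fraction of the feed the user's favorite topic
    (index 0) occupies as the algorithm chases their clicks."""
    clicks = [1.0] * topics  # start with a neutral click history
    history = []
    for _ in range(rounds):
        share = clicks[0] / sum(clicks)  # feed space given to topic 0
        history.append(share)
        # the user clicks every favorite-topic post they're shown,
        # and the algorithm logs those clicks for the next round
        clicks[0] += share * posts_per_round
    return history

print(favorite_topic_share())  # share climbs every round
```

Even starting from an even split, the favorite topic's share only ever goes up -- after a handful of rounds it dominates the feed, which is the echo chamber in miniature.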
This becomes dangerous when sites rely on their own skewed judgment to determine what ends up on users' feeds, removing users' ability to form fully informed opinions. Like when Facebook intentionally held back conservative news stories, skewing the site in a decidedly liberal direction. This could have disastrous results, especially in the case of a super-close election in which undecided voters make up their minds based on what they believe to be fair and unbiased research.
Do you take your bigotry straight-up, or sweetened with sugary bullshit?
If you still don't believe any of this has the potential to effect any sort of real change, check out what happens when anti-vaxxers, a fairly small movement compared to those who support the practice of vaccination, use social media to their advantage. In 2014, the hashtag #cdcwhistleblower exploded, seemingly bringing to light the thousands of tweeters who believed the CDC was withholding information concerning the link between MMR vaccines and autism in African-American kids. Celebrities jumped on the bandwagon, and the CDC issued a response to the deluge of tweets.
You're not truly woke until you talk out of your ass on and off camera.
In reality, the vast majority of tweets bearing the conspiracy-driven hashtag came from a mere 10 anti-vaxxer accounts, but the damage was done. The high number of tweets fooled the algorithm into deeming the topic trendy, procuring the tag a spot on people's feeds. This reinforcement is a perfect breeding ground for the viral spread of misinformation, given the tendency for users to follow others with similar beliefs and the algorithms' habit of perpetuating those viewpoints indefinitely. Every post we see, whether it is accurate or not, is likely to be one we agree with, allowing false information to become truth in our minds -- truth we inevitably share like false prophets and/or former male gigolos and pet detectives.
Algorithms Could Create Robot Overlords
The robot uprising is an apocalyptic possibility -- but it's going to be a lot more boring than we all expected. Right now, there is an automated computer program writing a speech for a politician, and it's getting it right. One speech read exactly like something you'd hear any day on C-SPAN.
That said, another speech wasn't quite as successful. It ended with, "For example, I mean probably all of us have had a mom or a grandmom or an uncle to whom we say, hey, I noticed your legs are swelling again. Fluid retention. Fluid retention." The point is that you should know there are automated programs that can imitate 95 percent of our elected officials' speech patterns, which means we're due for an actual politician who reads off a speech generated by an algorithm. And we'll probably vote for him or her because they say exactly the right words.
Our robot kings won't stop at rhetoric, though. The Italian government has a program that tries to catch tax fraud by monitoring how much everyone spends. If an Italian taxpayer spends 20 percent more than they claim to earn, they're flagged as a candidate for fraud. After one algorithm catches you doing dirty deeds, another algorithm might set your bail.
Your only hope is that the robot glitches and calculates your million-euro bail in lira.
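That 20-percent rule reduces to a one-line check. A sketch under our own assumptions -- the function name and the euro figures are made up, only the threshold comes from the program described above:

```python
def audit_candidate(declared_income, observed_spending, tolerance=0.20):
    """Flag a taxpayer whose spending outruns their declared income
    by more than the tolerance (20 percent, per the Italian program)."""
    return observed_spending > declared_income * (1 + tolerance)

print(audit_candidate(30_000, 34_000))  # within the 20% band: False
print(audit_candidate(30_000, 40_000))  # 33% over declared income: True
```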
This may seem like the ultimate way to make major decisions while maintaining a perfectly fair and unbiased atmosphere, but of course that isn't true. Algorithms still need to be fed data, and the person doing the feeding may have an ulterior motive, or might allow their bias to seep into the information. Someone who has problems with a certain race or gender could overly scrutinize one demographic. A programmer could also keep their opinions totally to themselves and still end up with a racist algorithm, because the data itself reflects our terrible, terrible thoughts. For example, an internet search for Asian women used to return an insane amount of pornography results, and that's not because Google Search was born with a weird racial bias. We taught it that.
The Government And Corporations Can Now Control The News
Since the internet is so vast and so easily manipulated, all you need to make an impression is a piece of advertising or propaganda that looks oddly like legitimate news. What makes this sort of advertising so scary and pervasive is that reliable, well-respected news sites like CNN and The New York Times have created entire branches dedicated to this kind of content.
"Shell: the official gasoline of the Matrix."
If you think people are too smart for this to be an issue, we've got some news for you. Apparently, our innocent doe eyes can't tell the difference, but that's probably because the majority of these blatant advertisements aren't playing by the rules, giving themselves a huge edge. While you can easily avoid an article or website, promoted content is becoming more obtrusive and sneaky by showing up in places that you used to trust. For instance, Google Maps is on the verge of rolling out a feature which prioritizes sponsored businesses. In other words, your GPS will soon begin to strongly emphasize the pizza places that have an advertising deal with Google, and bury other potentially great pizza places in a sea of anonymous search results. Other companies are going as far as to send ads to your phone as notifications, to ensure that you have to pay attention to them. Indeed, this is not the future Doc Brown promised us.
This ad is somehow more intrusive than a giant holographic shark.
Aside from the obvious implications, this can have disastrous results if it falls into the wrong hands. In 2014, the National Republican Congressional Committee created websites that were made to look like impartial news blogs which reported on local politics. The reports naturally skewed in the Republicans' favor, but looked so much like genuine news sites that people were fooled.
The original headline was "John Barrow Fills His Suit Pants With Farts," but they decided that was too on the nose.
Advertising Is Becoming Close To Brainwashing
With advertising and news becoming so closely linked and so hard to avoid, maybe you'll unplug from your TV every now and again and go outside. (Thanks, Pokemon!) Except today's ads aren't going to keep quietly running on websites and TV shows at their regularly scheduled time slots whether you see them or not. Ads are going to start waiting for you, only running when you turn on your TV or pick up your video game controller, lurking around the corner like an omnipresent horror movie villain, coming out when you least expect it.
How about that? Actual ethical issues in video games. Huh? *cricket sounds*
These pervasive methods have already been abused to a terrifying degree by Ted Cruz. (Remember him?) By mining data from the users of his mobile app, Cruz's team created psychological profiles of his supporters to try to figure out how best to appeal to them during his bid for the presidency. Of course, it ultimately didn't work out for him, but who knows how much further along the technology will be in the next election cycle, provided there will be another election cycle after this one.
If you think you can avoid being politically pigeonholed by staying mum on the topic, think again. Today's candidates are in the "wild west" of digging into data to get their edge. And by "data," we mean shopping habits, social media accounts, browsing records -- ANYTHING that someone has recorded and stored is fair game for analysis. For example, data modeling researchers have figured out that frozen food consumers are more likely to be in the anti-abortion camp. How did they figure that out? Who knows? It's a new science. But you can bet your ass that someone out there is using your nonpolitical everyday choices to manipulate you into making their choices.
They're pro-life; just not pro-healthy-life.
One professor thinks we're only two election cycles away from politicians having the technology to create ads tailored to every voter. You read that right. Professor Scarypants isn't predicting campaign ads for niche demographics -- he's predicting ads tailored for individuals, based on the lifetime of data each individual has vomited out into the universe.
Basically, this is another way of saying that future candidates will be using your love of porn in their ads one day.
"Vote Ted Cruz, and receive a FREE copy of the porno starring that woman who looks like him."
Follow Carolyn's quest for world domination on Twitter.
Also, follow us on Facebook, and we'll follow you everywhere.