
Skynet. That's one of the first places our minds go when we think of artificial intelligence. Even Google, which has heavily invested in AI, is developing an off switch in case a futuristic AI-enabled blender goes berserk and turns a family into a protein shake. There is a real subtext of creepiness to every new development, even when the methods used to make these advancements sound like the ideas of stoned 13-year-olds. Labs everywhere are making AI programs that do stupid, embarrassing things, the kind that artificially intelligent beings of the future will look back on with the same shame you feel when you see pictures of yourself as a kid.


Your hair scared away sexy, which is why you wrote a song celebrating its triumphant return.

Exactly what kinds of dumb but disconcerting things are we teaching AI to do? Well, how about ...

5
Making AI Watch Desperate Housewives And YouTube Videos

In most cases, researchers aren't teaching AI to do something. They're teaching it how to teach itself to do something. Like how researchers at MIT are making an algorithm watch hundreds of hours of YouTube videos and shows like The Office, The Big Bang Theory, and Desperate Housewives (among others) in an apparent attempt to convince it that it's unemployed but still has its ex-girlfriend's Netflix login. Their goal is to get the AI to accurately predict how people will greet each other one second before it happens.

By making the algorithm watch 600 hours of footage, all the researchers are doing is turning it into the friend who's undeservedly proud when their predictions of an easily-predictable movie come true. "Those two dudes look like they're going to shake han-AHH THEY DID IT, TOLD YOU!" That AI thinks it's so smart because people on TV aren't wildcards disrupting the world of greetings by smearing pudding on each other's faces or slapping together baby crocodiles. We shake hands, we kiss, we high-five, we hug. If you're doing anything else, you're overthinking it.

bymuratdeniz/iStock/Getty Images
"BACK OFF, ALLEN. NO ONE DOES THAT."

Congrats, MIT's TV-watching algorithm. You've learned how to choose one thing from a list of four! Keep this up, and soon you'll be able to read a Choose Your Own Adventure book without frying out in a brilliant shower of sparks.
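And "choosing one thing from a list of four" really is the core of it. Here's a toy Python sketch of the frequency-counting idea, with every scene label and training clip invented for illustration (the real MIT system runs deep neural networks over raw video, not a tally sheet):

```python
from collections import Counter, defaultdict

# The four greetings humanity has agreed upon. BACK OFF, ALLEN.
GREETINGS = ["handshake", "hug", "high-five", "kiss"]

# Made-up "clips": what kind of scene it was, and what greeting followed.
training_clips = [
    ("office", "handshake"), ("office", "handshake"),
    ("party", "hug"), ("party", "high-five"),
    ("date", "kiss"), ("office", "high-five"),
]

# Tally which greeting follows each kind of scene.
counts = defaultdict(Counter)
for scene, greeting in training_clips:
    counts[scene][greeting] += 1

def predict_greeting(scene):
    """Guess the most common greeting ever seen in this kind of scene."""
    if scene not in counts:
        return "handshake"  # safe default: nobody gets slapped
    return counts[scene].most_common(1)[0][0]

print(predict_greeting("office"))  # handshake, TOLD YOU
```

A predictor this dumb already beats random guessing (25 percent) whenever people are as boringly predictable as sitcom characters, which is most of the time.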

But When You Think About It ...

... maybe it is just a little weird that it nails the prediction 43 percent of the time. How many times have you messed up a handshake so severely that you felt like running away to live in the woods with nary a hand to shake? This algorithm will keep improving, and one day it could predict the most likely directions we'd run when in danger, or which body movements mean a human is reaching for its off button. Why can some people effortlessly dodge attacks in a video game like Dark Souls? How can video game speed runners race through difficult Super Mario World levels in record time? Easy: They're exploiting observed patterns. One day, an AI might observe enough of ours to make the switch to hunting us down for sport.

4
Teaching AI To Be An Annoying Roommate


Look, people make mistakes, alright? Sometimes we leave the fridge open for a bit when we should have closed it. Sometimes we leave the milk out. Doesn't mean we need an artificially intelligent sentry who will bust our ass when we don't comply with basic kitchen decorum.

Orion Pictures
"PLEASE PUT BACK THE FUNYUNS. YOU HAVE 20 SECONDS TO COMPLY"

With only an Xbox Kinect sensor, a laser pointer, and a camera, a bunch of smartypants-types from Cornell and Stanford University have created a neural network version of Felix Unger from The Odd Couple -- a persnickety machine brain that rats on people who fail to fulfill simple household responsibilities. It's called Watch-Bot, and it's a real asshole.

Using whatever "probabilistic learning models" are, Watch-Bot records everything that happens in the area it's placed. For initial experiments, it was set up in a kitchen and an office. It recorded and analyzed 458 videos of people moving in and out of its field of view, with some people purposefully misplacing objects. It knew what object was out of place 60 percent of the time. But Watch-Bot wouldn't be the most perfect autonomous cybernetic asshole ever if it didn't "Um, excuse me, but ..." every time you messed up.

The laser pointer is there so Watch-Bot can point out exactly what you're doing wrong at that moment. Leave the milk on the counter? Watch-Bot draws a sniper red dot on you. Stupid thing is already making threats.
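If you want to see how little actual judgment is involved, here's a toy Python sketch of the flag-the-unusual idea. Every observation and the threshold are invented; the real Watch-Bot learns from video with probabilistic models, not a lookup table:

```python
from collections import Counter

# Made-up sightings of (object, location) pairs, as if logged from video.
observations = [
    ("milk", "fridge"), ("milk", "fridge"), ("milk", "fridge"),
    ("milk", "counter"),
    ("mug", "desk"), ("mug", "desk"), ("mug", "sink"),
]

seen = Counter(observations)                      # counts per (object, place)
totals = Counter(obj for obj, _ in observations)  # counts per object

def is_out_of_place(obj, location, threshold=0.3):
    """Flag the object if it sits in this spot less than 30% of the time."""
    if totals[obj] == 0:
        return False  # never seen it; Watch-Bot withholds judgment
    return seen[(obj, location)] / totals[obj] < threshold

if is_out_of_place("milk", "counter"):
    print("PLEASE PUT BACK THE MILK. YOU HAVE 20 SECONDS TO COMPLY")
```

Note what's missing: any idea of what milk is or why it belongs in the fridge. It's pure statistics wearing a hall-monitor sash.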

But When You Think About It ...

Watch-Bot has no idea what it's looking at. It doesn't know what milk is, or that it can go bad if left out. Let's say you move a small plant on a kitchen windowsill so it can get sun. Watch-Bot doesn't know why you did that. Watch-Bot confused. Watch-Bot no understand concept of nourishment for living. Watch-Bot is the ultimate judge, for better or worse. All it knows is the cold letter of the law. It doesn't understand motive. It can't be talked to or reasoned with. Watch-Bot does not give a shit why, for example, you are out after the curfew mandated by Supreme Lord Trump.


3
Understanding Human Behavior By Reading Awful, Awful Books

MIT is making an algorithm watch TV shows to learn how people greet one another. Stanford is taking the idea a step further. They're making their AI read hundreds of thousands of books to learn how people interact with physical objects. They figure that if we want walking, talking C-3POs all over the place, then machines need to know, for instance, all the things we can (and will) shove in our butts for sexual gratification/boredom. The system is called Augur, and it just wants to get to know you a little better, baby.

CYCLONEPROJECT/iStock/Getty Images
"Hey, girl. You like licking a robot's charging port?"

The research team is training their algorithm to read books from a free online literature resource called Wattpad. Augur combs through billions of words from thousands of stories and logs the proper object-verb associations. Augur knows that when a person encounters a bench, they are likely to sit on it, and it knows all the ways we can sit. Slouching, sitting upright, we can lie on it, sniff it when the lady sitting on it walks away, rip ass all over it and blame pigeons -- you know, the common uses for benches. It's a great idea. Acclimate a machine to the myriad ways we humans do stuff so that it doesn't go bonkers and cry hydraulic fluid when a cool guy pulls an A.C. Slater and sits in a chair backward.
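The bookkeeping behind "object-verb associations" is simple enough to fake in a few lines. Here's a toy Python sketch that counts verbs appearing near an object; the three-sentence corpus and the tiny verb list are made up, and the real Augur uses proper language processing over billions of words:

```python
from collections import Counter, defaultdict
import re

# Stand-in "corpus" -- one part classic literature, two parts Wattpad.
corpus = [
    "He decided to sit on the bench and wait.",
    "She sat on the bench all afternoon.",
    "The werewolf doctor leapt over the bench dramatically.",
]

# Tiny hand-made verb lemmatizer; a real system would do this properly.
VERB_FORMS = {"sit": "sit", "sat": "sit", "leapt": "leap"}

# Count each verb that occurs within a few words of "bench".
associations = defaultdict(Counter)
for sentence in corpus:
    words = re.findall(r"[a-z]+", sentence.lower())
    for i, word in enumerate(words):
        if word in VERB_FORMS and "bench" in words[i:i + 5]:
            associations["bench"][VERB_FORMS[word]] += 1

# The most likely thing a human does with a bench, per this tiny corpus:
print(associations["bench"].most_common(1))  # [('sit', 2)]
```

Swap in a million werewolf romances for the corpus and you can see exactly how the associations get weird.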

But When You Think About It ...

Wattpad, the site Augur is getting its reading material from, has all the public domain classics you read in high school and college, but it's mostly made up of fanfiction, erotic fiction, or a combination of the two. It's learning its object-verb associations equally from Tolstoy's Anna Karenina and the breadth of knowledge found in the "werewolf" category of literature. I'm sure it's picking up valuable lessons from such acclaimed tales as Doctor's Orders, the fanfic wherein Harry Styles of One Direction is, for some reason, a doctor in a Chicago hospital emergency room. This sheds light on a creepy part of AI. If the technology were to become democratized, as a lot of technology eventually does, what's to stop people from filling the head of the AI they bought at Walmart with the most vile, hateful, cruel philosophies the internet has to offer (which is already available on about 90 percent of the internet)?

Calling it now: In the future, there will be millions of boy band sexbots with synthetic brains made of 4chan.

2
Teaching AI To Teach Itself How To Play Video Games


For an hour and a half, an AI played Super Mario World. It played the way a mom trying to stay in touch with her kids plays video games: terribly, but admirably. For a long time, it made no attempt to dodge enemies, instead choosing to run straight to the first one in its way. It learned to jump about four minutes in. Then it wouldn't stop jumping. It was jumping for no reason, jumping because it could. Minutes later, it was launching itself off of enemy heads over obstacles with the confidence of a veteran Mario speed runner.

When the software's creator, SethBling, decided it was time to call it a day, his AI still hadn't come close to beating the level. Not that it mattered. It had already proved everything it needed to: that AIs suck at video games.

Here's how Bling made it happen:

Bling created a program he calls MarI/O. It only does one thing: learn to play Super Mario World. It learns by failing enough times to make a human child playing Super Mario World in 1991 toss their controller at a wall and cry. It saves the strategies that worked and discards the ones that didn't. It gets smarter with each Game Over. And he's not the only one doing it. At one point, there was a competition for AI Mario runs. Robots are stealing our jobs, so we might as well let them beat our video games too, and then have sex with our wives and attend our kids' PTA meetings. If MarI/O were a kid, its parents would scold it for not putting as much effort into its schoolwork as it does into video games.
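The loop itself is almost insultingly simple. Here's a toy Python sketch of the keep-what-worked idea; to be clear, this is not MarI/O (which evolves actual neural networks with an algorithm called NEAT), just a mutating button sequence thrown at a fake one-pit level:

```python
import random

random.seed(0)  # deterministic for this toy demo

ACTIONS = ["right", "left", "jump", "nothing"]
LEVEL_LENGTH = 20
PIT_AT = 10  # Mario must jump at this position or it's Game Over

def score(sequence):
    """How far right the sequence gets before dying. Bigger is better."""
    x = 0
    for action in sequence:
        if x == PIT_AT:
            if action == "jump":
                x += 2  # clears the pit
            else:
                return x  # fell in; press F
        elif action == "right":
            x += 1
        elif action == "left":
            x = max(0, x - 1)
    return min(x, LEVEL_LENGTH)

def mutate(sequence):
    """Copy the sequence, randomly rewriting a few buttons."""
    child = list(sequence)
    for _ in range(3):
        child[random.randrange(len(child))] = random.choice(ACTIONS)
    return child

# Each Game Over either teaches nothing or produces a better run:
# keep the strategies that worked, discard the ones that didn't.
best = [random.choice(ACTIONS) for _ in range(40)]
for generation in range(500):
    challenger = mutate(best)
    if score(challenger) >= score(best):
        best = challenger

print("distance reached:", score(best))
```

Five hundred generations of flailing, zero understanding of what a Goomba is, and it still makes progress. That's the whole pitch.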

But When You Think About It ...

It learns at an accelerated rate, and it applies the new ideas immediately. It's more persistent than any of us have ever been. Given the time, it would play and fail and learn over and over until it got it right. It doesn't get tired. It doesn't complain. It's perfect. When it began, it didn't even know it needed to move to the right. It just sat there. It keeps trying new things, building on what's already worked. This dumb little program that plays a Mario game is exactly the kind of impressive person we wish we could be. All it's doing with its life is playing video games, and somehow it has a more promising future than any of us.


1
Using Twitter


Remember that Twitter bot Microsoft gave us and then quickly took back after the upstanding citizens of the internet corrupted it and turned it into a foul-mouthed, racist 9/11 truther? That was an AI designed to learn through Twitter. And it did, I guess. @TayTweets went from an adorable innocent teen to ...

Within 24 hours. And those are the nice ones without the racial slurs. Most times, Tay repeated what she was told. Other times, she would conjure up some weirdness of her own.

It was a shitshow. Damn internet. Always turning teenage girls into a living InfoWars comment section. If only someone would use Twitter AI for good. Maybe use it to, I don't know, put smiles on people's faces?

If you're one of the lucky ones who hasn't been devoured by the Twitter smile demons, those malevolent grins slithering across people's faces are the work of an AI-controlled Twitter bot called Smile Vector. It takes submitted pictures and adds either a smile or a frown, depending on what type of creeped out it wants you to feel. It's especially disturbing when it creates a blurry tremor over the eyes and mouth, like the person's face is vaporizing away to reveal the carnivorous lizard beast beneath.

Or when it transforms them into completely different people. Or when its grip on sanity slips.
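The trick behind bots like this is usually latent-vector arithmetic: in a generative face model, "smile" is roughly a direction in face-space, so you nudge a face's code along it. Here's a toy Python sketch where the four-number "latent codes" are made-up stand-ins for real learned embeddings with hundreds of dimensions:

```python
def add_vectors(a, b, weight=1.0):
    """Element-wise a + weight * b."""
    return [x + weight * y for x, y in zip(a, b)]

# Pretend latent code for one neutral face.
neutral_face = [0.2, -0.5, 1.1, 0.0]

# The "smile direction": average smiling face minus average neutral face.
avg_smiling = [0.6, -0.1, 1.0, 0.4]
avg_neutral = [0.1, -0.4, 1.1, 0.1]
smile_direction = add_vectors(avg_smiling, avg_neutral, weight=-1.0)

# Slide the face along the smile axis. Crank the weight up for the
# eyes-vaporizing lizard-grin effect described above.
slightly_creepy = add_vectors(neutral_face, smile_direction, weight=1.0)
extremely_creepy = add_vectors(neutral_face, smile_direction, weight=3.0)

print(slightly_creepy)
```

Push the weight too far and the code leaves the region of face-space the model actually learned, which is when you get the completely-different-person and grip-on-sanity-slips outputs.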

But When You Think About It ...

Google, get that button ready.


Luis is uploading his consciousness onto the internet and making a kickass Geocities page for it. In the meantime, you can find him on Twitter, Tumblr, and Facebook.

See how dumb your computer problems are in 5 Things I Learned Fixing Your Computer and learn how to stop being dumb with them in 5 Common Misconceptions That Destroy Computers.

Subscribe to our YouTube channel to see why mashing the keys faster doesn't make you a better hacker in 5 Stupid Things Movies Believe About Hacking, and watch other videos you won't see on the site!

Also follow us on Facebook, and help appease our robot overlords.
