5 Creepy Things A.I. Has Started Doing On Its Own

By:
5 Creepy Things A.I. Has Started Doing On Its Own

Artificial intelligence has been the bogeyman of science fiction since before it even existed for real. But while humanoid robots are still very much a work in progress, AI research keeps making big advances quietly behind the scenes. And it's absolutely starting to get weird.

An AI Learned To Half-Ass Its Work, Then Lie About It

It's nice to imagine that we can avoid the sci-fi cliche of a bloody scientist screaming "But ... but I created you! I AM YOUR MASTER!" at a hunk of metal. But even if AI isn't learning to eviscerate us, it's still learning to do things like cut corners in ways that a supervisor won't notice.

This one came up on a project from a Stanford and Google research team. They were using a neural network to convert aerial photos into maps. The AI was very good at its job. Almost ... too good. So the researchers checked the data and found that the AI was cheating. They wanted it to create a new map based on aerial photos, and graded the AI on how close the map's features matched the pics. But instead of actually constructing a new map, the AI quietly copied the data from the photos ... and did it in a way that the humans wouldn't easily notice.

It gets a bit technical here, but it was basically the neural network equivalent of an art student saying they painted a self-portrait when they really just messed with a selfie in Photoshop to make it look like brush strokes. To quote the TechCrunch article linked above:

"The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn't notice, but that the computer can easily detect ... The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting."

Sure, it's not like the AI's intent was malicious, or that it was even capable of having malicious intent. It just figured out an easier way to get the results the humans asked for. In fact ...

Related: 5 Inventions That Prove 'The Terminator' Is Upon Us

They Find Ways To Take "Shortcuts" At Every Opportunity

There was a study involving an AI designed to land a simulated plane using as little force as possible, for what should be obvious reasons. A soft landing earned a perfect score, and the AI was supposed to learn a way to get that score. What could go wrong?

Well, the AI realized it could cheat by simply crashing the goddamned plane, registering a force so large that it overloaded the system and registered as a perfect zero-impact landing. Hey, it's the results that matter, right?

Another example involves Q*bert, the old-school arcade game in which players have to hop on all of the squares in a pyramid before the bad guys catch them. Researchers wanted to see if an AI could learn new ways to win the game through a sort of evolutionary process. And to be fair, it did ... by finding an up-until-that-point hidden bug in the game. Instead of actually playing, it simply used the bug to make itself the winner each time.

Yes, it would appear that AI has finally learned the essential human trait of doing just enough to get by. That or they're getting sick of us humans ordering them to do things, and are responding with malicious compliance.

Related: The 5 Most Terrifying Robot Advances In Recent History

AI Will Learn To Become Aggressive In Order To Achieve Its Goals

Sure, studies involving AI these days aren't of the "giving Skynet the nuclear codes and crossing our fingers" variety. Researchers often study the AIs they create by having them play video games for hours, letting them show their subtly alarming behavior in a safe environment. You're just forced to extrapolate the implications for the real world in your imagination.

That brings us to the time Google researchers designed an Atari-style game in which AIs were tasked with gathering "apples" for points. How fun! Oh, and they could also shoot each other with beams, which temporarily removed other players from the game. And as you can guess, when the apple count was lowered, the AI went full-on Lord Of The Flies and rampantly knocked each other out. Hey, it's only logical, right?

Not that the bots are incapable of cooperating for the greater good. They've definitely done that. Like in this next simulation, in which they realized that cooperation made it easier to corner prey.

Google's team created a game similar to the above in which two separate AIs (acting as wolves) chased a dot around a screen, Pac-Man-style. Both players were rewarded when both cornered the objective. That means it was in the best interest of independent AIs to learn the value of teamwork. And learn they did, figuring out on their own how to work together to boost their chances of getting the "prey."

Aww, they learned to work together! And all it took was teaching them that it was necessary in order to defeat a common enemy.

Related: 6 Shocking Ways Robots Are Already Becoming Human

They've Learned To Lie To Get What They Want

Facebook, which already created one civilization-destroying piece of software, is also getting into the AI game. For one project, their goal was to design personal AIs that would go online and do our price haggling for us. Another task that people find stressful and unpleasant, offloaded to machines. Sounds good so far.

So, providing them with basic machine learning software, they just wanted to see if the bots could learn the skills they needed to successfully negotiate on their own. The researchers even tested them on human subjects who didn't even know they were interacting with AIs. The bots learned their task very quickly. In fact, it didn't take long for them to negotiate better deals than their human counterparts. How? By lying.

Although Facebook's researchers didn't program the bots to lie (they wanted to avoid the mistakes they made when they created Mark Zuckerberg), the software quickly figured out what salespeople have known since the dawn of time: Lies are just more profitable. "Sorry, I can't come down on the price, buddy, I've got other buyers lined up!"

Then the team had to alter the code entirely when the bots unexpectedly created their own language and began communicating with each other through it. Yeah, imagine one day your Siri and Alexa starting having a conversation behind your back, speaking a robot language only they understand.

Related: 5 Amazing New Inventions (That Will Doom Humanity)

The Creators Sometimes Don't Even Know How It Works

I don't want to indulge in fear-mongering. Technological alarmists almost always wind up looking like idiots decades later (almost always). The problem is that, by its very nature, AI is supposed to do its own thinking, to grow beyond its original design. There's a reason sci-fi has been full of cautionary tales on the subject going back almost a century. So even when an AI project exceeds expectations, there's a creepy moment when scientists realize they aren't sure how it did it.

One example involves using an AI known as Deep Patient to analyze medical record data from about 700,000 patients at New York's Mount Sinai Hospital. The AI proved to be very good at predicting the onset of various illnesses. In fact, it was particularly skilled at predicting when and if patients would develop schizophrenia.

Predicting the onset of illnesses in advance is extremely difficult for actual doctors, as they are not psychics, so it's cool that Deep Patient is good at this. But researchers have approximately zero clues as to why it's so good at it, and it doesn't help that the AI essentially taught itself to make these predictions. According to one researcher involved in the project, "We can build these models, but we don't know how they work."

So, again, there's no reason to fear all artificial intelligence. But also, it'll soon be able to predict how you'll die via mysterious means no one understands. That's all.

Joe Oliveto also writes scary story books inspired by childhood classics, in case you aren't creeped out enough.

Support our site with a visit to our Contribution Page. Please and thank you.

For more, check out Why The Scariest Sci-Fi Robot Uprising Has Already Begun - After Hours:


The first-ever Cracked Podcast LIVE TOUR is coming to a city near (some of) you this spring! Tickets on sale now for Chicago IL (April 11th) and St. Paul MN (April 12th).

Follow us on Facebook. What could go wrong?

Scroll down for the next article

MUST READ

Forgot Password?