The 5 Most Terrifying Robot Advances in Recent History

Robots are terrifying. Anybody who says different is either a robot scientist or somebody who has been replaced by a terrifying, terrifying robot. Think we're overreacting? Think we're advocating hyperbolic, knee-jerk neo-Luddism for the sake of comedy? Probably! But then, if robots are so harmless, explain why science is ...

#5. Giving Computers Schizophrenia


Researchers at the University of Texas at Austin have done it! They've finally managed to transfer mental diseases to the realm of computers: They gave a supercomputer schizophrenia. G- ... good job, guys?

Was this an actual checkbox on the Big List of Scientific Accomplishments -- making crazy artificial intelligences? Are we sure that particular line wasn't penciled in after the fact by a disgruntled intern or something?

"Wait, no...here it is: Schizophrenic Computer, right above Vacuum Cleaner with Chlamydia."

Well, it's too late now, because DISCERN is a very real thing. DISCERN is a neural network: an artificial mind created by simulating human brain connections. To describe the mechanism behind schizophrenia, scientists posited the hyperlearning theory, which states that schizophrenics retain too much information. They learn things they shouldn't, and can't keep the information straight.

Scientists then emulated schizophrenia in an artificial intelligence (we're pretty sure just typing that sentence is technically a war crime) by telling the computer a bunch of stories, letting it establish relationships between words and events, and allowing it to store them as memories with only the relevant details. It worked pretty well. Then they amped up the memory encoder, causing it to retain ALL details, relevant or not, and boom: Roboschizo. The computer lost track of what it was taught and could not relay any coherent narratives.
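If you want to scare yourself at home, here's a toy version of the idea in Python -- ours, we should stress, not the actual DISCERN code, which is a proper neural network. A healthy memory encoder keeps only the salient words of each story; crank the "learning rate" up and it hoards everything, at which point stories with nothing in common start bleeding into each other:

```python
# A toy sketch of the "hyperlearning" idea (ours, not the real DISCERN
# model): a healthy encoder stores only the salient words of each story,
# while a hyperlearning encoder retains ALL details, relevant or not.

STORIES = {
    "hospital": "I visited the doctor and the doctor gave me medicine",
    "bombing": "a terrorist planted the bomb and the terrorist fled the city",
}

FILLER = {"i", "a", "an", "the", "and", "me", "my"}

def encode(text, hyperlearning=False):
    """Normal mode drops filler words; hyperlearning mode keeps everything."""
    words = set(text.lower().split())
    return words if hyperlearning else words - FILLER

def recall(cue, memories):
    """Return every stored story that shares any word with the cue."""
    cue_words = set(cue.lower().split())
    return [name for name, mem in memories.items() if cue_words & mem]

for hyper in (False, True):
    memories = {name: encode(text, hyper) for name, text in STORIES.items()}
    label = "hyperlearning" if hyper else "normal"
    # Cue with pure first-person filler: the healthy encoder recalls nothing,
    # but the hyperlearned one "remembers" the bombing as its own experience.
    print(label, "->", recall("i me my the", memories))
```

The normal run recalls nothing from a cue of pure first-person filler; the hyperlearning run matches both stories, which is roughly how a third-person bombing report becomes a first-person confession.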

How crazy did it get?

At one point it claimed responsibility for a terrorist attack. It literally told the researchers that it had planted a bomb. The AI did this because it confused a third-person report about a terrorist bombing with a first-person "memory" that it retained. Through a simple computerized misfire, a supercomputer accidentally put itself in the role of a terrorist. We're pretty sure that was the plot to WarGames.

In another creepy example, the computer started talking entirely in third person, like a cybernetic version of the Rock (dibs on movie rights). It just didn't know which entity it was supposed to be anymore. DISCERN had developed a faulty sense of self. Hopefully they've already developed some sort of robotic antipsychotics, or else the University of Texas at Austin scientists are sure going to have egg on their faces when the robots start eating them off.

"Hi! I am totally a scientist and not a face-stealing robot!"

#4. Teaching Robots to Lie


Scientists have taught a group of robots some strategies for deception and trickery, which is nowhere near as compelling as screaming "ROBOTS HAVE LEARNED TO LIE." So we're going with the latter.

ROBOTS HAVE LEARNED TO LIE, YOU GUYS.

These strategies were modeled after bird and squirrel behavior (because squirrels are apparently the tricksiest motherfuckers in all the animal kingdom), and were demonstrated when Professor Ronald Arkin from Georgia Tech's School of Interactive Computing had a robot navigate a course to find a hiding spot. Then he sent out a second robot to try to locate the first one, at which point the scientists would reward the winning bot for a job well done (presumably with cyber-blow and tiny robo-hookers).

"You win! Your reward is that you get to destroy this in front of humans and watch them cry."

It worked like this: The bots were supposed to follow a path with preset obstacles that got knocked down as they progressed. One of them ran the course and then the other tried to follow the overturned markers to find the first. The hiding robot learned the system, however, and would deliberately knock over other obstacles just to create a false trail. It would then hide somewhere far away from the mess it had created. It's a simple tactic, but using it, the hiding droid was able to trick the seeker 75 percent of the time.
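For the record, the tactic is simple enough to fake in a dozen lines of Python. This is our own back-of-the-napkin simulation, not Georgia Tech's code, and the 25 percent botch rate is a number we made up so the math lands on the reported 75 percent:

```python
import random

HIDING_SPOTS = ["left", "center", "right"]
BOTCHED_TRAIL_RATE = 0.25  # assumed: how often the false trail fails

def run_trial():
    spot = random.choice(HIDING_SPOTS)              # where the hider really goes
    if random.random() < BOTCHED_TRAIL_RATE:
        trail = spot                                # botched: markers point true
    else:                                           # knock over markers toward
        trail = random.choice([s for s in HIDING_SPOTS if s != spot])  # a decoy
    seeker_guess = trail                            # the seeker trusts the markers
    return seeker_guess != spot                     # True = seeker got tricked

trials = 100_000
tricked = sum(run_trial() for _ in range(trials))
print(f"hider fooled the seeker {tricked / trials:.0%} of the time")  # ~75%
```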

Again, that strategy was not programmed into the robot from the start. It's something the robot devised entirely on its own through trial and error. Good thing this is just a mild-mannered university experiment, right? Imagine if the military was using these literal Decepticons ...

"Liebot! Destroy them with your lies."

Aw, you got us: Of course they are! This harmless academic lark just happens to be funded by the Office of Naval Research. They're planning on using robots like these to "protect ammo and other essential supplies." So they're armed! And they hide really well! Sounds like a plan, traitors to humanity.

Oh, but maybe we shouldn't worry about these cunning and deceptive military robots: The developers have set out an Asimovian series of protocols for the robots to fulfill before they can lie. Here they are: The robot has to be involved in a conflict, and it has to benefit from the deception.

"Of course I didn't murder that family. I am just an adorable robot puppy."

That's it!
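In code, that entire Asimovian safeguard is one boolean. Here's a sketch of the two stated conditions -- ours, obviously, not the Navy's actual source:

```python
def may_deceive(robot_in_conflict: bool, lie_benefits_robot: bool) -> bool:
    """The whole protocol: lie only if there's a conflict involving the
    robot AND the robot comes out ahead by lying."""
    return robot_in_conflict and lie_benefits_robot

print(may_deceive(True, True))    # True  -- cleared to lie
print(may_deceive(False, True))   # False -- no conflict, no fibbing
```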

Don't trust the Roomba.

#3. Teaching Robots Ruthlessness


Scientists at the Laboratory of Intelligent Systems took a group of robots, a "food" source, and a "poison" source, and put them all together in a room. Good job, guys. It's maybe a little confused to try to poison a robot, but we commend the effort; truly you are a boon to the continued survival of mankind. Unfortunately, the robots didn't die. They simply learned the folly of mercy.

"Oh, it's not for you. I'm going to put it on your fucking grave."

See, the robots would get "points" for staying next to a food source they found and lose them for proximity to poison. The bots had little blue lights attached to them that would light up randomly (although they could also control the light if they wanted to, which you should remember, because it's going to come in terrifying later) and a camera to perceive said light. When the trials began, it didn't take the robots long to learn that the greater density of blue light was where the other robots were gathering -- i.e., where the food was. By emitting their blue lights at random, the robots were essentially screwing themselves; they were showing the others where the food was and giving them points.

Which is why, after a few trials, most robots stopped blinking their lights. Almost entirely. We set the robots to a task, and the first thing they did was refuse to help each other. It's probably good for humanity, all told. Somewhat worryingly, however, it didn't end there: Some robots headed away from food sources, blinking their lights more, to lead others astray. They went full Pied Piper.

"Fuck robots!"
