5 Hyped-Up A.I. Stories (That Secretly Make A.I. Look Really Dumb)

Any signs of intelligence here were added artificially.

Will artificial intelligence take over humanity? Will it rise up and slay everyone? Did it already do so? Are we secretly A.I., living in a matrix made by humans? 

We don’t know. But we do know two things. 1) We need to put periods after each letter in “A.I.,” to prevent people from thinking we’re writing about some guy named Al. And 2) Many stories about A.I., when we really dig into them, leave artificial intelligence looking a whole lot less intelligent than we thought. 

The Google Dude Who Called His A.I. Sentient Was Really Deluded

The Story:

We all rightly fear Google, whose tech keeps growing stronger, and who quietly removed “don’t be evil” from its induction oath. So we got super scared when we learned this year that the Google lab had cooked up an A.I. that turned sentient. Our source for this assessment of the A.I.’s humanity? Oh, just an actual Google engineer. 

This A.I., a bot named LaMDA, responded to conversations by claiming personhood and fearing being turned off. Most damning: Once the engineer, Blake Lemoine, went public, Google put him on leave ... then fired him. They were trying to silence him.

Blake Lemoine

Google

You can still Google him. Google search is ALSO sentient, and on his side!

But Then:

LaMDA is not sentient. LaMDA is a chatbot.

We might be able to program a sentient A.I. one day. We might be able to program desire, intent, and self-awareness into an A.I., leading to much debate over what exactly consciousness and sentience really are. But LaMDA isn’t that. LaMDA generates responses based on text it finds on the internet. It generates good responses, which means it finds good text and properly applies it to your questions, but that does not make it sentient. 
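To see how unmysterious that recipe is, here’s a toy sketch of a “chatbot” that does nothing but continue text based on word patterns it has already seen. This is our own illustration, a simple Markov-chain word predictor; LaMDA is a vastly larger neural network, but the basic move of producing plausible next words from ingested text is the same, and it involves zero inner life.

```python
# Toy illustration: a "chatbot" with no thoughts at all, just a table of
# which word tends to follow which. (Our own sketch; LaMDA is a far larger
# neural network, but it likewise produces plausible next words, not feelings.)
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Record, for each word, the words that followed it in the corpus."""
    words = corpus.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def respond(model: dict, prompt_word: str, length: int = 8) -> str:
    """Keep picking a plausible next word, starting from the prompt."""
    word, reply = prompt_word, []
    for _ in range(length):
        if word not in model:
            break
        word = random.choice(model[word])
        reply.append(word)
    return " ".join(reply)

model = train("i am afraid of being turned off because being turned off would be a kind of death")
print(respond(model, "being"))  # sounds eerie, involves zero self-awareness
```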

Let’s put it this way. Characters in video games have needs and make decisions. You would probably agree that they are not sentient. And yet they are more sentient than LaMDA. They are at least on the path toward what we may one day consider A.I. sentience. Bots that scrape the internet for responses are not. 

Codsworth Fallout 4

Bethesda

Codsworth liked that. 

Lemoine did not design LaMDA; he just chatted with it, so despite his credentials, he has no insight into its inner workings. Someone can see a video and swear they witnessed sentience, but their testimony means nothing: the person who originally said the words was sentient, but the recording is not. Scientists inside and outside Google have warned of the very real danger of people wrongly perceiving A.I. as sentient, and Lemoine’s story is an example of that. It’s still an interesting story, but it’s not the story of an A.I. turning sentient and the silencing of a whistleblower (Google fired him for violating data security policies, and his firing has not silenced him). 

Every other scientist who looks at LaMDA says it’s not sentient. Most notably, the original Washington Post story that shared Lemoine’s fears with the world talked to various other scientists, and they all said LaMDA isn’t sentient. The reporter herself reached the same conclusion, and the article reflects this. For this reason, scientists debunking Lemoine’s claims also call out the Post as unethical for choosing the headline “The Google engineer who thinks the company’s AI has come to life.”

Forget tech ethics, let’s talk clickbait ethics. If our headline had matched the reporter’s actual conclusions, it would have called Lemoine “The bonkers Google engineer who thinks the company’s A.I. has come to life.” The Post probably has rules against that kind of language, but at least that would have told you what sort of info to expect. 

Now for another story about bots terrifyingly gaining life in a tech giant’s evil lab:

A.I. Creating Its Own Language Was Less Impressive Than What It Was Supposed To Do

The Story:

This one’s a bit older than the other stories we’re looking at today; the others are all from this year, while this one’s from 2017. Facebook has a team called Facebook AI Research. Its acronym, FAIR, is exactly what we’d expect from a supervillain organization with a taste for irony. FAIR was building its own chatbots when it discovered that the bots had created their own language. Frightened about what the A.I.s might be plotting, in a language we can’t understand, Facebook killed all the bots involved in the project. 

Mark Zuckerberg F8 2019 Keynote

Anthony Quintano

But not the bot running the company. 

But Then:

We understood their language just fine. We just preferred they speak English. 

The goal was for these bots to speak English. Facebook trained them to communicate with humans, so they had to learn to talk like humans. The problem: In setting the A.I.s to talk among themselves, the programmers made them prioritize efficiency rather than mandating that they stick to using words the way we do.

So as their conversation progressed, they used words as a shorthand for longer phrases. For their task, two A.I.s named Bob and Alice had to negotiate trading hats, balls, and books. Here’s a partial transcript:

Facebook AI text

The Independent 

Headline: Robots are coming for your balls

Maybe strange text strings look sinister to you, but that's just xenorobophobia. They are using words to assign values to those various items. Despite reports, this didn’t shock scientists, and the scientists successfully deciphered what the bots were saying. For example, a bot repeating one word five times might say that it wanted a price of “5” for a ball. This worked very well for both bots and took marginally less processing power than using English grammar. But FAIR wanted them to learn English, so they reset the bots and started over.

They found it interesting that the bots shifted language in the way they did, but “computers use language laypeople don’t know” is not new or scary. That’s the first trick computers ever learned. 
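If you want a sense of how mild that shorthand really is, here’s a toy sketch of the idea. This is our own illustration, not FAIR’s actual code: repeating a word n times stands in for “n of that item,” which is trivial for a program to write and to read back.

```python
# Toy illustration of the repetition shorthand described above.
# This is NOT FAIR's actual negotiation code, just a guess at the general
# idea: repeating a word n times stands in for "n of that item."

def encode_offer(offer: dict) -> str:
    """Turn {'ball': 5, 'hat': 1} into 'ball ball ball ball ball hat'."""
    return " ".join(word for item, count in offer.items() for word in [item] * count)

def decode_offer(message: str) -> dict:
    """Turn 'ball ball ball ball ball hat' back into {'ball': 5, 'hat': 1}."""
    counts = {}
    for word in message.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(encode_offer({"ball": 5, "hat": 1}))           # ball ball ball ball ball hat
print(decode_offer("ball ball ball ball ball hat"))  # {'ball': 5, 'hat': 1}
```

Gibberish to a casual reader, completely legible to anyone who bothers to count the words, which is exactly what the FAIR researchers did.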


Those A.I. Images Are Cool, But Their Limitations Are Hilarious

The Story:

If you, like us, lack physical form and experience everything through the internet, this summer was the summer of Dall-E. This A.I. model (which most of us accessed through a tool called DALL-E Mini, now renamed to Craiyon) creates images from prompts. The results are amazing. Or horrifying. Or hilarious.

Dall-E example image

Craiyon

Also, you have to share the collage of nine cropped images, rather than a single complete image. That is the unwritten social rule.

But Then:

Hey, have all the fun you want with the tool; don’t let us stop you. However, once you get over the basic novelty that machine learning tools can find elements and combine them into something often coherent, you start to look at where the tool falls short. And here’s where it gets really interesting.

First, something everyone noticed: Faces look messed up. At best, you see this strange mosaic effect, with many pieces that don’t quite fit together. Other times, you just see a blur. 

Dall-E example image

Craiyon

We’re mentioning this here because it’s the most obvious flaw, but it’s not really a flaw at all. Dall-E skips scraping recognizable faces when grabbing source material. The programmers wanted to avoid violating people’s privacy, so its human face databank is pretty scrambled. You can also produce clearer faces with the right commands.

Here’s a less obvious failing. When you ask Dall-E for a diagram demonstrating some scientific concept, the result is gibberish. The source images make sense, but Dall-E doesn’t really understand the parts that it scrapes and then combines. To prove that, we asked Dall-E for a diagram showing proper condom use. The result, showing monstrous penis fingers, cannot be shared in our family publication, so here’s a different example. 

Dall-E example image

Craiyon

Dall-E also falls short when it comes to representing text. This surprises some people, since computers deal with text more easily than they deal with images, right? But Dall-E isn’t actually processing text here; it’s finding images of text, breaking them apart, and then putting them together. We tried counting exactly how many pieces in the following diagram are real letters, and the result was somewhere between ℓ and ₥.

Dall-E example image

Craiyon

The A.I. Designed To Hunt Down Tax Evaders’ Swimming Pools (Was Pretty Lame)

The Story:

Tax collectors have now turned to using artificial intelligence, in the most terrifying collaboration since Sheeran allied with Bieber. France used A.I. to examine homes from the air, to detect every swimming pool in the country. France taxes improvements on homes, and when homeowners failed to declare pools they’d built, the Ministry of Economy and Finance nailed them using their chlorine-detecting bots. This project spotted 20,000 pools, which they estimate will net $10 million in new tax revenue this year and $40 million once they expand the program. 

swimming pool

Asso Myron/Unsplash

Plus, innumerable spy pics of French sunbathers.

But Then:

Yeah, the story’s kind of scary. The scary part is that high-resolution satellite imagery can photograph us at any time, even when we’re in our own back yards, within walls guarded by Merovingians. But then, we’ve had photos like that for years now. That tech is older than some of the people reading this. 

As for this pool-finding algorithm, we’re not so impressed. All it did was scan photos for blue rectangles. Let’s not drop our jaws at the thought that A.I. can identify colors and shapes. We already take it for granted that phones can identify faces—not just the broad concept of faces, like how this algorithm finds blue rectangles, but can distinguish one face from another. Of course they can search pictures for rectangles. 

Dall-E example image

Craiyon

Well, some A.I. is good with faces anyway. 

Since the algorithm searched for blue rectangles rather than some more specific pooly criteria, it screwed up a lot, prompting its programmers to sentence it to longer sessions of machine learning. When pools weren’t rectangular, or when trees or other structures partially shaded them, the algorithm missed them. It also wrongly flagged other kinds of rectangles as pools. It identified blue tarps as pools. It identified solar panels as pools (France doesn’t tax homes extra when they have solar panels, but let’s not give them any ideas). 
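For a sense of how basic “scan photos for blue rectangles” is, here’s a toy sketch. It is our own illustration, not the French tax authority’s actual system, and the color bounds and thresholds are rough guesses: keep the pool-blue pixels, then keep the blobs that mostly fill a rectangle.

```python
# Toy illustration of "find blue rectangles in an aerial photo."
# Our own sketch, not the French tax authority's actual system; the color
# bounds and thresholds below are rough guesses for demonstration only.
import cv2
import numpy as np

def find_pool_candidates(image_path: str):
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Keep only "pool blue" pixels.
    mask = cv2.inRange(hsv, np.array([85, 80, 80]), np.array([130, 255, 255]))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 500:  # ignore tiny blue specks
            continue
        x, y, w, h = cv2.boundingRect(c)
        if area / (w * h) > 0.7:  # "roughly rectangular": blob fills most of its box
            candidates.append((x, y, w, h))
    return candidates

# A blue tarp or a solar panel passes this test as easily as a pool does,
# which is exactly the kind of mistake the French algorithm made.
```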

This does bring up the real danger of A.I., though: not that it will get too smart, but that it will stay stupid, and people will use that as an excuse to attack you.

A Record Company Hired An A.I. Rapper (Who Turned Out To Be Three Humans In A Trench Coat)

The Story:

Starting in 2019, there arose on TikTok a rapper named FN Meka, who was not actually a person at all. FN Meka was an A.I. He appeared as an animation, his lyrics were generated using a program, and his vocals too were spat out by software. The company behind him, Factory New, set their algorithm loose on a vast trove of words from the ether, and it came up with rap lyrics. Translating those lyrics to audio was then a simple job for a custom text-to-speech program.

FN Meka really gained fame this year, when Capitol Records signed a deal with him. Wow, a deal with an A.I.! How is that even legal? Then just 10 days later, Capitol dropped FN Meka, thanks to controversy over racial stereotypes in his lyrics. Ha, ha, an A.I. turned out to be racist—again!

But Then:

FN Meka was not A.I. Though there is no single person named “FN Meka,” people wrote his songs, people performed his songs, and people mixed his songs, making his stuff no different from any other songs, other than being kinda crappier. 

Factory New called Meka a “robot rapper” in his TikTok profile, but you can say whatever you want in TikTok profiles. Tomorrow, Kanye could (and probably will) call himself a being of volcanic energy sent from the future; that’s his right, but that doesn’t make it true. When pressed, Factory New admitted Meka’s backstory was “meant to create intrigue and provide cover for songs at the time which weren’t ready for scrutiny. FN Meka’s vocals have always been written and performed by humans.”

We’ve had virtual artists before. The Gorillaz, featuring the frontman from Blur, started more than 20 years ago. Their videos feature cartoon performers, while the touring band comprises a rotating roster of humans. 

Virtual bands don’t even need to be animated. The Monkees consisted of four humans, and while members took turns singing lead, their first songs had nameless other people playing the instruments and singing backing vocals, which were then falsely credited to the known guys. We don’t know why so many virtual bands are ape themed. Neither the Gorillaz nor The Monkees were made to sell NFTs, though FN Meka was originally made to sell NFTs.

As for signing a contract with a non-human entity, that too is far from unprecedented. Corporations themselves are non-human entities that are treated as legal persons. 

The moneymaking idea behind FN Meka wasn’t “let’s build a robot, so it can pump out music endlessly!” It was “let’s build an avatar, so that the songwriters and singers never gain fame and we can always cheat them out of the money we owe them.” And cheating artists and writers is as old as the recording industry itself. 

Follow Ryan Menezes on Twitter for more stuff no one should see. 
