5 Dark (But Hilarious) Times Big Tech Companies Failed Hard

Well, this is ... bleakly hilarious.

There's rarely a day when we don't hear about a tech company accidentally unleashing some sort of cyber-disaster on its user base. These traditionally come in two flavors: the funny, lighthearted error, and the one that ends with a litigation-proof apology email and the muted suggestion that maybe several million people should think about changing their passwords. As it turns out, though, there's a third type: the bleakly hilarious error. And it's a lot more common than you might think.

Amazon Made A Sexist HR Bot

As one of the world's biggest companies, Amazon deals with a lot of job applications. We're not sure what percentage of those applications hail from the water sports community, but the cumulative number was enough that in 2014, Amazon tasked one of its research groups with building an AI capable of filtering out "bad" applicants.

The good news? They managed to build a working demo. The bad news? It automatically disregarded any resume submitted by a woman. (We suppose shutting out half of the population is one way of cutting down on applications.) In order to help HRBot figure out what a "good" application looked like, the development team fed it over a decade's worth of resumes, with the hope that it could learn how to identify a shortlist of frontrunner applicants that could then be passed on to the company's flesh-and-blood recruiters. The only problem was that the sheer sausage-factory-ness of that resume pile accidentally taught HRBot to rank male candidates more favorably than female ones.

HRBot was more thorough than merely looking at the name at the top of each resume, though. It also analyzed the language of the resume in order to find any women who had sneaked past the initial purge. If a candidate included a reference to a women's college or a women's sports team, they were disqualified. If a candidate used language that didn't "read" male -- men are more likely to use verbs like "executed" and "captured" in resumes, for instance -- they were disqualified.

When technicians peeled back the system's code, they found it was weighing a candidate's gender characteristics more heavily than their technical knowledge or coding proficiency. Meaning that a frontrunner candidate could get away with knowing less than nothing about technology. Amazon tried to correct the bugs in the system, but no matter what they tried, its inner sexist would always win. In one of its final iterations, the team had written in so many safeguards against sexism that HRBot lost its mind and started recommending any applicant that was put in front of it. With no guarantee of success, or even that the system wouldn't self-engineer a way around any anti-sexism safeguards, the project was scrapped and placed in the same forgotten storage closet as the Fire Phone and Jeff Bezos' sexts.
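Amazon's actual code has never been made public, but the failure mode is depressingly easy to reproduce at home. Below is a deliberately tiny, entirely made-up sketch (fake resumes, fake hiring labels, a hypothetical keyword scorer) of how a screener trained on a male-heavy pile of historical hires learns to punish the word "women's" without anyone ever typing the word "gender" into the code:

```python
# Toy sketch only -- invented data and a hypothetical scoring function,
# not Amazon's real system.
from collections import Counter

# Pretend historical data: the "hired" pile skews male-coded purely
# because of who applied and got hired in the past.
HISTORICAL_RESUMES = [
    ("executed backend migration, captured key metrics", 1),        # hired
    ("executed trading systems, led cloud infrastructure", 1),      # hired
    ("captain of women's chess team, built compilers", 0),          # passed over
    ("women's college graduate, shipped distributed systems", 0),   # passed over
]

def train_keyword_weights(resumes):
    """Weight each word by how often it shows up in hired vs. rejected resumes.
    Nothing here mentions gender; the bias rides in on correlated words."""
    hired, rejected = Counter(), Counter()
    for text, label in resumes:
        (hired if label else rejected).update(text.replace(",", "").lower().split())
    return {word: hired[word] - rejected[word] for word in set(hired) | set(rejected)}

def score(weights, resume):
    """Higher score means a 'better' candidate, according to the bot."""
    return sum(weights.get(word, 0) for word in resume.replace(",", "").lower().split())

weights = train_keyword_weights(HISTORICAL_RESUMES)
print(score(weights, "executed and captured several projects"))          # positive
print(score(weights, "women's soccer captain, senior kernel developer")) # negative
```

Note that neither test resume says anything about actually being able to code; the second one just commits the sin of containing "women's."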

Related: Amazon Thinks I'm Some Sort Of Serial Killer In Training

Tumblr Uses AI To Find Porn, Disaster Predictably Ensues

In December 2018, Tumblr decided to take a big dump over its business model by announcing it was banning all forms of adult content. The ban, which prohibited users from posting genitalia, pornography, erotic artwork, and whatever "female-presenting nipples" are, also applied retroactively to existing content on the site, regardless of its age, all in the name of creating a "better, more positive" experience. (We're still not quite sure how a site featuring nothing except for "male-presenting nipples" wouldn't be the scariest, serial-killerest thing imaginable, but we'd probably get used to it eventually.)

In order to ensure that every vestige of boob, peen, hoo-ha, and rump was scrubbed, Tumblr announced that it wouldn't be leaving such important work to its horny user base. No, it was outsourcing the job to an AI -- one that was apparently programmed by a curmudgeonly grandma, because it was soon flagging everything vaguely erotic ... and a lot of things that were definitely not.

But what we think is really weird about this is **THIS CAPTION HAS BEEN FLAGGED AS EXPLICIT**

Despite this malfunction, the ban went ahead as planned. Tumblr wound up creating such a "better, more positive" space that owner Verizon is now looking for someone to take the site off their hands, presumably so that they can finally rest and (in their mind's eye) watch the sun rise on a grateful internet.

Related: The Alarming Way Netflix Customizes Itself To Our Tastes

Google Translate Turned Into A Creepy Nostradamus

We can't think of a single occasion in the last few years when we've used Google Translate. If you're like us, it might be wise to get back into the habit, though, because there's a wealth of evidence to suggest that this isolation is making the site, well ... a little insane.

In June 2018, bored monolinguals / high-schoolers cheating on their language homework stumbled upon what at first glance reads like the opening pages of a bad conspiracy novel. When commonplace words like "dog" or nonsensical words like "ag" were typed repeatedly into Google Translate and converted into English, the site started spitting out vaguely sinister religious teachings.

Entering the word "dog" 19 times and converting the text from Maori to English, for instance, resulted in Translate providing this "helpful" suggestion that the poster should stop screwing around and start preparing their soul for the coming apocalypse.

[Screenshot: Google Translate, via Vice. "dog" entered 19 times, Maori to English, returns: "Doomsday Clock is three minutes at twelve We are experiencing characters and a dramatic developments in the world, which indicate that we are"]

Typing "ag" in any amount and converting from Irish to English, meanwhile, resulted in the site providing passages that only served to get weirder with every additional "ag," until it delivered its final pronunciation of what we imagine a holy error code to look like.

[Screenshot: Google Translate, via Vice. A long string of "ag"s, Irish to English, returns: "Deuteronomy NetwNUESH NOW YOOS NEEDTH OF YOOSNOM OE AGING NAME AND LOAD NUMBERS OF THE AGENCY NON - Numbers at the ages of a I agon agon ag L"]
We see they're getting a bit more lax on the whole "Don't be evil" philosophy.

Other translations were slightly less cryptic, as one user discovered when they translated "goo" (typed 13 times) from Somali to English, and were told that they should "cut off the penis into pieces, cut it into pieces."

[Screenshot: Google Translate, via Reddit. "ho wa re yo u," translated from Maori, comes back as "your time is coming."]
If you ask for it translated into sign language, it just reads "Knife sharpening gesture, malevolent stare."

People began freaking out about "TranslateGate" being the product of an honest-to-god ghost in the machine -- or even worse, evidence that Google was using the contents of people's email inboxes to train the electronic brain that makes Translate tick. But when smarter people got their shot at looking things over, they suggested that the ominous messages were simply Google's neural network struggling to make sense of nonsensical inputs and spitting out equally nonsensical outputs. Add in the fact that Translate was likely taught Maori using a copy of the Bible, and suddenly this mystery doesn't seem quite so spooky. (We have no idea how to explain that "cut the penis" thing, though. That was definitely ghosts.)
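Google hasn't published what's going on under Translate's hood, but the "it learned Maori from the Bible" theory is easy to demo with a toy. Here's a minimal sketch using a word-level bigram babbler we invented as a stand-in for the real neural decoder: feed it an input it can't match, and it just free-associates from whatever it was trained on.

```python
# Toy stand-in for a translation decoder -- not Google's model.
# The point: when the input carries no usable signal, the output
# collapses back onto the training data's vibe.
import random

TRAINING_CORPUS = (  # stand-in for the scripture-heavy text low-resource models lean on
    "prepare your soul for the day of judgement "
    "the hour is at hand and the signs are upon us "
    "heed the word and be ready for the end of days"
)

def train_bigram_model(text):
    """Learn which word tends to follow which."""
    words = text.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def translate(model, source_text, length=12):
    """'Translate' the input. 'dog dog dog' matches nothing the model knows,
    so it just rambles from its doom-flavored priors."""
    known = [w for w in source_text.split() if w in model]
    current = random.choice(known) if known else random.choice(list(model))
    output = [current]
    for _ in range(length):
        current = random.choice(model.get(current, list(model)))
        output.append(current)
    return " ".join(output)

model = train_bigram_model(TRAINING_CORPUS)
print(translate(model, "dog dog dog dog dog dog dog dog dog"))
# Prints apocalyptic filler, because apocalyptic filler is all it has ever read.
```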

Related: 5 Reasons You Should Be Scared Of Google

YouTube Thought The Notre-Dame Fire Was A Hoax

In April 2019, the iconic Notre-Dame Cathedral in Paris burnt to a crisp, the latest act in the Universe's long-running vendetta against the building. The news of the fire shocked the world and caused countless people to immediately flock to YouTube, where outlets like CNN, Fox News, and France24 were streaming video of the event, live and in as much color as a sooty cloud allows. While the world was watching, however, YouTube's sophisticated algorithms were skittering about the place and warning visitors that the stream everyone was tearfully clinging to contained copious amounts of "misinformation."

It's not quite clear what part of "This building is on fire" they considered to be fake news, but YouTube was so sure that it soon also started suggesting that people check out the Wikipedia article for September 11. You know, to flex about what a real disaster looks like.

Or at least, we hope that was the intent. We know that YouTube has a problem with conspiracy theorists (and other terrible people) gaming its algorithms. We hope the rot hasn't gone so deep that YouTube's first instinct when it sees a disaster it considers a lie is to start promoting another disaster (which it also thinks is a lie).

Related: There's Something Much Scarier On YouTube Kids Than Momo

Facebook Thought An Indonesian Earthquake Needed The Ol' Razzle Dazzle

We know that Facebook has been through some stupid-bad stuff during, oh, the entirety of its existence, but the law of averages states that at some point, the site has to do something good purely by accident. This is not that story.

In the aftermath of the earthquake that struck the Indonesian island of Lombok in August 2018, killing 98 people, worried friends and relatives of the locals flocked to Facebook to inquire about people's safety and generally spread messages of hope. A lot of these messages included the word "selamat," which depending on context can mean either "safe" or "congratulations."

Guess what meaning Facebook went with?

"Those are ... um ... sympathy streamers."

After everyone clicked "Share," they were greeted with their post and a carnival of digital delights that included balloons, confetti, and a whole lotta party vibes. Suffice it to say, no one was impressed.

When Facebook was called to account for its algorithm's bad-taste attempt to lighten the mood, a spokesperson said that this was an error that would never be repeated. The animation that triggered when someone wrote "congratulations" was turned off in that area forever, while the culprit algorithm was told to machine-learn how to read a room.
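We obviously don't have Facebook's source, but a context-blind keyword trigger is about the simplest thing you can write, which is probably how it shipped in the first place. Here's a hedged guess at the general shape, with an invented function name, an invented trigger list, and an illustrative Indonesian phrase:

```python
# Hypothetical sketch -- invented trigger list and function, not Facebook's code.
CELEBRATION_TRIGGERS = {"congratulations", "congrats", "selamat"}

def should_fire_confetti(post_text):
    """Naive trigger: any celebratory keyword launches the balloon animation,
    regardless of what the surrounding sentence actually says."""
    words = {w.strip(".,!?").lower() for w in post_text.split()}
    return bool(words & CELEBRATION_TRIGGERS)

print(should_fire_confetti("Semoga selamat dari gempa"))        # roughly "hope you're safe from the quake" -> True, cue balloons
print(should_fire_confetti("Congratulations on the new job!"))  # True, as intended
```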

Adam is on Twitter and has a newsletter about depressing history!

For more, check out Hacking The Digg Algorithm - Agents Of Cracked - Episode 6:


Also, we'd love to know more about you and your interesting lives, dear readers. If you spend your days doing cool stuff, drop us a line at iDoCoolStuff at Cracked dot com, and maybe we can share your story with the entire internet.

Follow us on Facebook. It only takes a click.
