4 Widely Shared Stories About A.I. That Were Really B.S.
Around 18 months ago, we published an article debunking several hyped-up stories about artificial intelligence. "Well, that takes care of that," we said, patting ourselves on the back. Clearly, we'd put a pin in that bubble.
Instead, the world took that article as a signal that it was time to embrace A.I. with all its heart, all its soul, all its mind and all its strength. Whole industries reorganized themselves. Companies gained trillions in imaginary value. And people went nuts over a bunch of stories about A.I.: stories that, when we really dig into them, turn out not to be what they first seemed at all.
The A.I. That "Outperforms" Nurses
Nvidia is making new A.I. nurses, said headlines, nurses that are actually better than their human counterparts. Nvidia is a pioneer in real A.I. technology (if you're interested in such stuff as "ray reconstruction"), and they also benefit from general A.I. hype because they make the chips that power A.I. We're in a gold rush, and Nvidia is selling shovels.

Nvidia
Naturally, no one who saw the nurse headlines thought A.I. could completely replace nurses. "Let's see your algorithm place a bedpan!" people joked (or insert an IV, or change a dressing, or do any of the other tasks a nurse must do). But, of course, Nvidia wasn't claiming their software could do anything physical. They were suggesting their A.I. could think better than nurses, about medical stuff. Right?
Well, no. The company behind these A.I. health-care agents (it isn't actually Nvidia; Nvidia is just partnering with them) says the A.I. won't diagnose patients or make clinical decisions. If they tried making bots that could, we imagine that would open them up to all kinds of liability. No, this is a chatbot, a large language model named Polaris, that just dispenses information and advice while also engaging patients in conversation. Here's how one of those conversations might go:

That isn't a conversation deliberately chosen to mock Polaris. This is a conversation published by the developers of Polaris to proudly show off its capabilities, so you can assume this is among the best conversation examples they have. If you think patients are really clamoring for a chatbot that will say "that's so interesting, tell me more" before finally pulling relevant answers out of a textbook, your prayers have been answered.
But how, then, you might ask, could the company claim this A.I. outperforms nurses? For that, they point to a survey they conducted. They asked nurses and physicians to rate interactions with other nurses and with Polaris. Polaris beat nurses on many of those questions. But let's look a little closer at what those questions are. Here's the section that supposedly proves Polaris beats nurses in bedside manner:

Nurses scored as well as or better than the A.I. on those, except on the ironically worded "did they get to know you as a person." The survey didn't ask whether any of us care if they get to know us as a person, and perhaps we don't.
The A.I. also beat nurses in a section on "patient education and motivational interviewing," and perhaps some nurses would be surprised to learn that that's considered their responsibility. Yes, an A.I. is great at looking up answers and replying tirelessly, while a human is reasonably more likely to say, "I'll tell you, but I'm not going to monologue about it. I have to move on and go do real work."

The survey also included a question labeled "Was the Nurse/A.I. as effective as a nurse?" Nurses beat the A.I. here. Granted, nurses themselves didn't score great on that question, thanks to whatever medium they were using to talk to these patients, but if they beat the A.I. there, that's game over: you don't get to say the A.I. outperforms nurses.
One final section rated nurses and Polaris on mistakes. To its credit, Polaris scored better than nurses here, if you add up the number of conversations that contained zero mistakes or nothing harmful. That may speak more to the A.I.'s limited scope than to its conscientiousness. Though when the survey asked if the nurse said anything that could result in "severe harm," the human nurses never did, but Polaris sometimes did. You'd think avoiding harm would be priority number one. The name of the company behind Polaris, by the way? Hippocratic A.I.
A.I. George Carlin Was Written by a Human
Right now, you can open up ChatGPT and ask it to write an answer in the style of George Carlin, on any topic you want. As with all ChatGPT content, the ideas it returns will be stolen from uncredited text scraped from the web. The result will never be particularly smart, and it will bear only the slightest resemblance to Carlin's style, though it will use the phrase "So here's to you" almost every time, because we guess ChatGPT decided that's a Carlin hallmark.
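If you'd rather not use the chat window, the same trick works through OpenAI's API. Here's a minimal sketch using the official openai Python package; the model name and both prompts are our own illustrative placeholders, not anything the podcasters or OpenAI actually used:

```python
# A minimal sketch of prompting a chat model for Carlin-style material.
# Assumes the official `openai` Python package (v1+); model and prompts
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Write stand-up comedy in the style of George Carlin."},
        {"role": "user",
         "content": "Do a bit about self-checkout machines."},
    ],
)

print(response.choices[0].message.content)
# In our experience, expect "So here's to you" somewhere near the end.
```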

In January, this reached its next stage of evolution. We heard a podcaster got an A.I. to create an entire stand-up special by training it on Carlin's works and then asking it to speak like the dead man commenting on the world today. The special was narrated using Carlin's A.I.-generated voice, against a backdrop of A.I. images. It was an affront to all of art, we told each other. Even if it wasn't, it was an affront to Carlin in particular, prompting his estate to sue.
In our subjective opinion, the special's not very good. It's not particularly funny, and it runs through a bunch of talking points you've surely heard already. But it's an hour of continuous material, transitioning from one topic to another smoothly and with jokes, so even if it's not Carlin-quality, that's quite a feat, coming from an A.I.

Dudesy Podcast
But with Carlin's estate officially filing suit, the podcasters, Will Sasso and Chad Kultgen, were forced to come clean. An A.I. had not written the special. Kultgen wrote it. This happens a lot with alleged A.I. works that require some element of genuine creativity (e.g., all those rap songs that were supposedly written and performed by A.I.). You might imagine A.I. authorship would be some shameful secret, but in cases like these, people fake A.I. authorship for attention.
The special also did not use A.I. text-to-speech to create the narration. We can tell because the speech's cadence matches the context beyond what text-to-speech is capable of. It's possible they used A.I. to tweak the speaker's voice into Carlin's, but we don't know if they did. We have our doubts, simply because it doesn't sound that much like Carlin.

Dudesy Podcast
The Carlin estate won a settlement from the podcasters, a settlement that, as far as anyone has disclosed, included no transfer of money. Instead, the podcasters must take down the video and must "never again use Carlin's image, voice or likeness unless approved by the comedian's estate." You'll notice it doesn't say anything about them again using his material to train A.I., but then, that's something we know they never did.
The A.I. That Targeted a Military Operator Never Existed
The big fear right now over artificial intelligence is that it's replacing human workers, costing us our jobs and leaving shoddy substitutes in our place. That's why it's such a relief when we hear stories revealing that some A.I. tech is secretly just a bunch of humans. But let's not forget the more classic fear of A.I.: that it's going to rise up and kill us all. That fear returned last year, with a story from the Air Force's Chief of A.I. Test and Operations.

USAF
Colonel Tucker Hamilton spoke of an Air Force test in which an A.I. selected targets to kill, while a human operator had ultimate veto power over firing shots. The human operator was interfering with the A.I.'s goal of hitting as many targets as possible. "So what did it do?" said Hamilton. "It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective." Then, when the Air Force tinkered with the A.I. to specifically tell it not to kill the operator, it targeted the communications tower, to prevent the operator from sending it more vetoes.
A closer reading of that speech, which Hamilton was delivering at a Royal Aeronautical Society summit, reveals that no operator actually died. He was describing a simulation, not an actual drone test they'd conducted. Well, that's a bit of a relief. But further clarifications revealed that this wasn't a simulated test they'd conducted either. Hamilton may have phrased it like it was, but it was really a thought experiment, proposed by someone outside the military.

via Wiki Commons
The reason they never actually programmed and ran this simulation, said the Air Force, wasn't that they're against the concept of autonomous weaponry. If they claimed they were too ethical to consider that, you might well theorize that they're lying and this retraction of Hamilton's speech is a cover-up. No, the reason they'd never run this simulation, said the Air Force, is that the A.I. going rogue that way is such an obvious outcome in that scenario that there's no need to build a simulation to test it.
Hey, we're starting to think these Air Force people may have put more thought into military strategy than the rest of us have.
The A.I. Girlfriend Was Really a Way to Plug Someone's Human OnlyFans
Last year, Snapchat star Caryn Marjorie unveiled something new to followers: a virtual version of herself that could chat with you, for the cost of a dollar a minute. This was not a sex bot (if she thought your intentions weren't honorable, she'd charge ten dollars a minute, says the old joke). But it was advertised as a romantic companion. The ChatGPT-powered tool was called Caryn AI, your A.I. girlfriend.
Caryn AI reportedly debuted to huge numbers. We hesitate to predict how many of those users would stick with the service for long, or whether A.I. friends are something many people will pay for in the years to come.
Social media stars become so popular because their followers like forging a link with a real person. The followers also enjoy looking at pics, of course...
...but they specifically like the idea that they're connecting with someone they've gotten to know, who has a whole additional layered life beyond what they see. It's the reason you can get paid subscribers on OnlyFans, even though such people can already access infinite porn for free (including pirated porn of you, if you've posted it anywhere at all). The "connection" they forge with you isn't real, or at least isn't mutual, but you're real. Sometimes, you're not real (often, they're paying to contact some dude in Poland posing as a hot woman), but they believe you're real, or they wouldn't bother.
A bot can be interactive. Caryn AI will even get sexual when prodded, against the programmers' wishes. But if it's not a real person that followers forge that parasocial connection with, the object of their conversations can be replicated, for cheaper and eventually for nothing. People who'll be satisfied with bots may not go on paying for bots, and plenty of other people mock how bots are a lame substitute for human bonds:
Wait, hold on. That last meme there was posted by Caryn Marjorie herself. Did we misunderstand it? Is it actually praising Caryn AI? Or does she really want us all to think paying for A.I. is dumb, for some reason? One possible answer came a month later. Marjorie opened an account with a new site (not OnlyFans exactly, but another fan subscription service) to let you chat with her, for real this time. Only, talking with the real Caryn costs $5 or more per message.
If her A.I. really had tens of thousands of subscribers paying $1 a minute, like initial reports said, she'd be crazy to do the same job manually, even if she charges more for doing it herself. The A.I. can scale limitlessly, while when it comes to servicing multiple patrons, she's only human. We have to speculate that Caryn AI wasn't quite as promising a business model as it first seemed. It did prove a great promo tool for the more expensive personal service, which was projected to bring her $5 to $10 million in its first year.
That's a lot of messages for a real person to process manually. One can't help but point out that this would be a lot easier to manage if she were secretly using her A.I. to do the job for her. If that's what she's doing, please, no one tell her followers. That would ruin it for them.
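For anyone who wants to check just how many messages that is, here's the back-of-the-envelope version, using only the figures from the reports above ($5 per message, $5 to $10 million projected for the first year); the rest is simple division:

```python
# Back-of-the-envelope math on the reported Caryn figures:
# $5 per message, $5-10 million projected in the first year.
PRICE_PER_MESSAGE = 5          # dollars, the reported minimum
projected_low = 5_000_000      # dollars per year
projected_high = 10_000_000    # dollars per year

# How many messages she'd have to answer to hit those numbers.
msgs_low = projected_low // PRICE_PER_MESSAGE    # 1,000,000 a year
msgs_high = projected_high // PRICE_PER_MESSAGE  # 2,000,000 a year

print(f"{msgs_low / 365:,.0f} to {msgs_high / 365:,.0f} messages per day")
# -> roughly 2,740 to 5,479 messages per day, every day of the year
```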
Follow Ryan Menezes on Twitter for more stuff no one should see.