4 Unexpected Ways Technology Is Policing Our Behavior
Man, what an age we live in. 100 years ago, people were riding horses to work, and work was just you trying to kill the horse so you could feed your family. Now look at us. We have Taco Bell being delivered to us via drone, probably. But despite all of this, our world is still full of pesky humans breaking society's rules. You know what we need, right? Technology that will make it impossible for anyone to ever be dishonest about anything.
In fact, already we have ...
DNA Testing To Stop Food-Spitters And Pooping Dogs
It wasn't so long ago that using DNA to solve crimes was only something CBS dramas did. Now DNA testing is so commonplace that they're using it at Chili's. I don't know if you've ever been to Chili's, but I can basically describe it as "What if 1987 was a food?"
In 2015, a customer at a Chili's in Syracuse became suspicious that a waiter had spit in his drink. Apparently, this is NOT Chili's policy, so keep that in mind on your next visit. The customer complained to their waiter that their broccoli was undercooked, and the world's most sensitive waiter had an utter shit fit about it. And then, on his way home, the customer noticed that the lid popped off his to-go cup in the car, and there was spit floatin' in it. In his own words, "It was definitely a loogie." Again, nothing that Chili's usually does. Ruby Tuesday, maybe.
Not content to drink another man's throat coat, the customer complained to the restaurant, which denied everything. Then the cops got involved, and they took samples from the cup and the waiter. One big helping of science later, they determined that the waiter totally spit in the drink, and he was forced to pay $125 and got a one-year conditional discharge.
Not that there aren't shenanigans to root out on the customer end. When one dissatisfied KFC customer took to Facebook to complain that he'd been served a salty, crispy portion of Kentucky fried rat, the man got himself a lawyer and the rat was handed over for independent lab testing. Those tests confirmed that it was in fact just chicken with a "tail" made of breading. No lawsuit for you, sir!
As testing becomes cheaper and easier, we're going to see all kinds of petty shenanigans resolved this way. Did you know there's a company called PooPrints out there that will genetically test dogshit to find out who hasn't been cleaning up after their pet? And then they'll drive to the dog owner's house and throw the feces at their front door? (Note: I am now told that they do not do this second part.)
The way it works is that homeowners' associations and building management require pet owners to submit a sample so that it can be matched in the event of an unscooped turd later. And hey, that means less shit on your shoes and fewer wannabe Tyler Durdens befouling our food. Great! There is no possible way this can ever be abused! Yes, I can't wait to get a plumber's bill from a convenience store after a DNA test proved I was the one who clogged their toilet last month.
They're Now Using Sophisticated AI To Fight ... Livestream Pirates?
Paying for a live broadcast can be a gamble. You might witness a memorable event, or you may end up paying $60 to watch a dude catch a stray blow to the face and then crumple to the mat two minutes into the fight. So some people decide to pirate that shit. Is there any way to combat someone pointing their camera at WrestleMania and livestreaming it to all of their Facebook followers? Clearly we need to be putting our most advanced technology on this problem.
This is where artificial intelligence comes in. Pirates are always looking for ways to get around the crude algorithms that simply look for the exact broadcast that should be behind a paywall. Thus, AI has to be able to constantly look for subtle signs so it can shut 'em down. For instance, it used to be easy enough for a program to just search for the logo of a network on somebody's feed. If it could detect the UFC logo, boom, the broadcast could be shut down. But of course, pirates figured that out and started blurring or blocking the logos.
Now, though, AI is able to look deeper at a broadcast. Say someone is pirating a World Cup match. The network logo is blurred, but the game is not. The AI is being taught to determine what stadium it's looking at. If it's a stadium where a game is currently happening, that gets flagged. With some facial recognition shenanigans, it can even scan boxers and recognize the ones who are fighting right now. No telling how it performs in the ninth round, when a boxer's face looks more like crushed ham than man, but that's the magic of computers. They can figure it out.
Really, facial recognition software is what we should be excited about when it comes to a magical future society in which no one can ever get away with anything ever. In fact ...
Facial Recognition Is Being Used To Stop Even The Most Trivial Of Crimes
I don't live in China, so I'm happy to find out I'm wrong about this. But it sounds like you can't walk 20 paces there without someone scanning your noggin to determine if you've been jaywalking or engaging in other unruly activities.
For example, cheating on exams has been a big issue. There's a test called the gaokao, which is a college entrance exam, and every year millions of students take it. It's nine hours of intense knowledge-barfing, and the stakes are high. So high that battling cheaters is basically its own job. Metal detectors wait at the doors, and drones monitor the testing. And yes, people will hire surrogates to take the test for them. Or they used to, before facial recognition screwed up that idea. Hard to cheat if a computer can tell from the beginning that you're not a high school student but a 30-year-old ringer making 1,500 yuan.
Cheating occurs in more places than the classroom, of course. China is also using facial recognition to nab fake marathon runners. That's a thing! When runners try to take shortcuts, the mountain of traffic cameras present is able to pinpoint who's doing it and flag them as dirty, unsportsmanlike chuckleheads. At one recent marathon, 237 of them were caught. (Christ, was anyone not cheating?)
The technology is also being used to monitor gamers. The fact that you play a certain game can now be tied to your official government ID in a database. If you get too close to the screen, the game will blur so you have to move back. Children under 12 are required to limit their screen time to an hour per day, while minors over 12 only get two hours. This is all part of an effort to curb both gaming addiction and nearsightedness.
"Wow, that sounds like a Utopia in which all bad things have stopped happening," you say. But wait, it gets better. KFC now uses facial recognition to let you pay for your order. In fact, it will also recommend orders for you. You look like a popcorn chicken person. And also a jaywalker.
Lie Detection Is Getting Way Better ... And Boy Are They Finding Uses For It
If there's one thing daytime TV has taught me, it's that you ARE the father. Also, that lie detectors are flawless pieces of science that are able to extract the truth like a surgeon pulling a nickel from your nose cavity. Of course, this is only because daytime TV is full of crap. No matter what Maury Povich says, lie detectors are not reliable. On the one hand, you may say that's a good thing, since all of society would collapse if humans had to go even one day without lying. On the other hand, you should know that science is working feverishly to deploy better lie detection to all of the governments and corporations of the world.
EyeDetect is a program that's being used by literally hundreds of companies right now, as well as various federal, state, and local government agencies in the USA. It's basically a robot that interviews you for a job. No longer do HR people need to take your interview answers on faith. Instead they plop you down for 30 minutes in front of a screen that monitors the tiniest changes in your eyes as you answer questions. It's meant to be faster, cheaper, and more reliable than a traditional lie detector test, claiming 86 percent accuracy. That's probably good enough, right? For a thing that decides whether or not you should get a job?
Elsewhere in the world -- specifically, at some border crossings -- you'll find AVATAR. Neither the last airbender nor a horny blue cat person, AVATAR is an AI meant to screen people to determine if they're an unsavory type. It monitors your eyes, voice, gestures, and posture as you answer questions, and will flag you as suspicious if it thinks you're lying so that a human agent can then question you, in a process that I'm sure is pleasant for everyone involved. It's currently being used at the Canadian border, in the European Union, and by Homeland Security. And with a whopping 60-75 percent accuracy, it's also being looked at for, you guessed it, refugee screening.
But really, do any of us have anything to fear from any of the technology on this list? I can't see what it would be, as long as you're not one of the, oh, 100 percent of humans who have something to hide.