In most cases, researchers aren't teaching AI to do something. They're teaching it how to teach itself to do something. Like how researchers at MIT are making an algorithm watch hundreds of hours of YouTube videos and shows like The Office, The Big Bang Theory, and Desperate Housewives (among others) in an apparent attempt to convince it that it's unemployed but still has its ex-girlfriend's Netflix login. Their goal is to get the AI to accurately predict how people will greet each other one second before it happens.
By making the algorithm watch 600 hours of footage, all the researchers are doing is turning it into the friend who's undeservedly proud when their predictions about an easily predictable movie come true. "Those two dudes look like they're going to shake han-AHH THEY DID IT, TOLD YOU!" That AI thinks it's so smart because people on TV aren't wildcards disrupting the world of greetings by smearing pudding on each other's faces or slapping baby crocodiles together. We shake hands, we kiss, we high-five, we hug. If you're doing anything else, you're overthinking it.