Twitter (And Others) Still Having Issues With Bigoted AI Cropping
If you've ever tweeted an image and watched it get pared down to something unrecognizable, you know there are problems with Twitter's cropping AI. What you may not know is that there are also sexist and racist biases in that AI, biases that Twitter is trying desperately to get rid of. In October 2020, tweets went viral pointing out that Twitter's image cropping algorithm isn't just annoying (which it absolutely is): it also prioritizes lighter skin and tends to crop out women's heads in favor of their bodies.
Why Did Twitter Make Biased AI?
The short answer: they trained it on biased people. In 2018, Twitter launched a new AI that would crop images based on their "saliency." What does saliency mean in this case? According to Twitter, "The saliency algorithm works by estimating what a person might want to see first within a picture." Twitter's own analysis found that the algorithm tends to crop in favor of whiteness and to crop men out in favor of women. Twitter says part of this bias comes from the training data used to teach the AI what counts as salient, although they admit that isn't the whole problem.
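Mechanically, saliency cropping boils down to two steps: score every pixel for how "interesting" the model thinks it is, then slide a crop window around and keep the position with the highest total score. The bias lives entirely in the scores; the cropping step just obeys them. Here's a minimal sketch of that second step (`crop_by_saliency` is a hypothetical helper for illustration, not Twitter's actual code, and the toy saliency map stands in for a real model's output):

```python
import numpy as np

def crop_by_saliency(saliency, crop_h, crop_w):
    """Return (top, left) of the crop window with the highest summed
    saliency. Brute-force search over all positions (fine for small maps)."""
    h, w = saliency.shape
    best_score, best_pos = -1.0, (0, 0)
    for top in range(h - crop_h + 1):
        for left in range(w - crop_w + 1):
            score = saliency[top:top + crop_h, left:left + crop_w].sum()
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos

# Toy 6x6 "saliency map": the model has scored the top-left region highest,
# so the crop snaps there and everything else gets cut.
sal = np.zeros((6, 6))
sal[0:2, 0:2] = 1.0
print(crop_by_saliency(sal, 3, 3))  # -> (0, 0)
```

The point of the sketch: if the model systematically scores lighter skin or women's bodies as more "salient," this perfectly neutral window search will faithfully reproduce that bias in every crop.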
Surely They've Fixed It By Now?
Hahahaha, ah, yeah, that'd be nice, right? In March 2021, Twitter switched to letting people crop their own photos, a really good idea that makes so much sense, costs them nothing, and is what everyone wants. But then they thought: no, no, let's go back to AI. And so they have, but this time they announced a bias bounty: they'd pay researchers who could prove that Twitter's algorithm had some kind of bias.
Spoiler alert: it does. The winning entry showed that the saliency algorithm tends to crop in favor of stereotypical beauty norms, and used this awesome series of photos as an example:
Another researcher showed the algorithm is so transparently racist that it even prefers lighter-skinned emojis.
Twitter said they were delighted by the results, even though those results implied that "biases seem to be embedded in the core saliency model." That seems like a weird thing to be delighted about, but apparently they're happy that people actually took the time to investigate their biased AI.
Twitter announced they're "already working towards no longer using saliency-based cropping on Twitter," but noted that "saliency modeling is not unique to Twitter, and there are likely many places where saliency is still in use today." I'm not sure how hard it is to, let me make sure I have this right, "stop using AI" and instead "just let me crop my own images," but considering the Photos app does a decent job of it, I think Twitter's Machine Learning team can figure it out.
Twitter is definitely right that they're not the only company using saliency models, and they're not the only app suffering because of biased algorithms. Instagram has a long history of stifling women's posts about sexuality, even banning a woman for reporting men who sent her dick pics. Instagram's history with algorithmic sexism is even longer than Twitter's; instead of image cropping, it has to do with what content is allowed on the app at all.
Meanwhile, TikTok is so plagued by discrimination claims that they seem to think promoting lighter-skinned content creators is a feature, not a bug. While all of this was kicking off at the older social media companies, TikTok was actually telling moderators to suppress posts from the poor and the ugly. As someone who's poor and ugly on TikTok, I can confirm that this policy is still in effect.
Top Image: Pexels/Pixabay